Jan 30 13:54:14.487181 kernel: microcode: updated early: 0xde -> 0x100, date = 2024-02-05
Jan 30 13:54:14.487196 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:54:14.487203 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:54:14.487208 kernel: BIOS-provided physical RAM map:
Jan 30 13:54:14.487212 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 30 13:54:14.487216 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 30 13:54:14.487221 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 30 13:54:14.487225 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 30 13:54:14.487230 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 30 13:54:14.487234 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfb1fff] usable
Jan 30 13:54:14.487238 kernel: BIOS-e820: [mem 0x000000006dfb2000-0x000000006dfb2fff] ACPI NVS
Jan 30 13:54:14.487242 kernel: BIOS-e820: [mem 0x000000006dfb3000-0x000000006dfb3fff] reserved
Jan 30 13:54:14.487247 kernel: BIOS-e820: [mem 0x000000006dfb4000-0x0000000077fc4fff] usable
Jan 30 13:54:14.487251 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Jan 30 13:54:14.487256 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Jan 30 13:54:14.487262 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Jan 30 13:54:14.487266 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Jan 30 13:54:14.487271 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Jan 30 13:54:14.487276 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Jan 30 13:54:14.487280 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:54:14.487285 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 30 13:54:14.487290 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 30 13:54:14.487294 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 13:54:14.487299 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 30 13:54:14.487303 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Jan 30 13:54:14.487309 kernel: NX (Execute Disable) protection: active
Jan 30 13:54:14.487314 kernel: APIC: Static calls initialized
Jan 30 13:54:14.487318 kernel: SMBIOS 3.2.1 present.
Jan 30 13:54:14.487323 kernel: DMI: Supermicro X11SCH-F/X11SCH-F, BIOS 1.5 11/17/2020
Jan 30 13:54:14.487328 kernel: tsc: Detected 3400.000 MHz processor
Jan 30 13:54:14.487332 kernel: tsc: Detected 3399.906 MHz TSC
Jan 30 13:54:14.487337 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:54:14.487342 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:54:14.487347 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Jan 30 13:54:14.487352 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 30 13:54:14.487358 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:54:14.487363 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Jan 30 13:54:14.487367 kernel: Using GB pages for direct mapping
Jan 30 13:54:14.487372 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:54:14.487377 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 30 13:54:14.487384 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 30 13:54:14.487389 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Jan 30 13:54:14.487395 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 30 13:54:14.487400 kernel: ACPI: FACS 0x0000000079662F80 000040
Jan 30 13:54:14.487405 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Jan 30 13:54:14.487410 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Jan 30 13:54:14.487415 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 30 13:54:14.487420 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 30 13:54:14.487428 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 30 13:54:14.487452 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 30 13:54:14.487457 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 30 13:54:14.487462 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 30 13:54:14.487467 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487486 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 30 13:54:14.487491 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 30 13:54:14.487496 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487501 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487506 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 30 13:54:14.487512 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 30 13:54:14.487517 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487522 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487527 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 30 13:54:14.487532 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:54:14.487537 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 30 13:54:14.487542 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 30 13:54:14.487547 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 30 13:54:14.487555 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xf4m 01072009 AMI 00010013)
Jan 30 13:54:14.487561 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 30 13:54:14.487566 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 30 13:54:14.487571 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 30 13:54:14.487576 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 30 13:54:14.487581 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 30 13:54:14.487586 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Jan 30 13:54:14.487591 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Jan 30 13:54:14.487596 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Jan 30 13:54:14.487602 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Jan 30 13:54:14.487607 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Jan 30 13:54:14.487612 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Jan 30 13:54:14.487617 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Jan 30 13:54:14.487622 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Jan 30 13:54:14.487627 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Jan 30 13:54:14.487631 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Jan 30 13:54:14.487637 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Jan 30 13:54:14.487641 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Jan 30 13:54:14.487647 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Jan 30 13:54:14.487652 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Jan 30 13:54:14.487657 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Jan 30 13:54:14.487662 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Jan 30 13:54:14.487667 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Jan 30 13:54:14.487672 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Jan 30 13:54:14.487677 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Jan 30 13:54:14.487682 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Jan 30 13:54:14.487687 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Jan 30 13:54:14.487692 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Jan 30 13:54:14.487698 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Jan 30 13:54:14.487703 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Jan 30 13:54:14.487708 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Jan 30 13:54:14.487713 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Jan 30 13:54:14.487718 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Jan 30 13:54:14.487723 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Jan 30 13:54:14.487728 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Jan 30 13:54:14.487733 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Jan 30 13:54:14.487737 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Jan 30 13:54:14.487743 kernel: No NUMA configuration found
Jan 30 13:54:14.487748 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Jan 30 13:54:14.487753 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Jan 30 13:54:14.487758 kernel: Zone ranges:
Jan 30 13:54:14.487763 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:54:14.487768 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:54:14.487773 kernel:   Normal   [mem 0x0000000100000000-0x000000087f7fffff]
Jan 30 13:54:14.487778 kernel: Movable zone start for each node
Jan 30 13:54:14.487783 kernel: Early memory node ranges
Jan 30 13:54:14.487789 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 30 13:54:14.487794 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 30 13:54:14.487799 kernel:   node   0: [mem 0x0000000040400000-0x000000006dfb1fff]
Jan 30 13:54:14.487804 kernel:   node   0: [mem 0x000000006dfb4000-0x0000000077fc4fff]
Jan 30 13:54:14.487809 kernel:   node   0: [mem 0x00000000790a8000-0x0000000079230fff]
Jan 30 13:54:14.487815 kernel:   node   0: [mem 0x000000007beff000-0x000000007befffff]
Jan 30 13:54:14.487823 kernel:   node   0: [mem 0x0000000100000000-0x000000087f7fffff]
Jan 30 13:54:14.487829 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Jan 30 13:54:14.487835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:54:14.487840 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 30 13:54:14.487846 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 30 13:54:14.487852 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 30 13:54:14.487857 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Jan 30 13:54:14.487862 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Jan 30 13:54:14.487868 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Jan 30 13:54:14.487873 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Jan 30 13:54:14.487878 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 30 13:54:14.487885 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 13:54:14.487890 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 13:54:14.487896 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 13:54:14.487901 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 13:54:14.487906 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 13:54:14.487911 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 13:54:14.487917 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 13:54:14.487922 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 13:54:14.487927 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 13:54:14.487933 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 13:54:14.487939 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 13:54:14.487944 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 13:54:14.487949 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 13:54:14.487954 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 13:54:14.487960 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 13:54:14.487965 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 13:54:14.487970 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 30 13:54:14.487975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:54:14.487981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:54:14.487987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:54:14.487992 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:54:14.487997 kernel: TSC deadline timer available
Jan 30 13:54:14.488003 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 30 13:54:14.488008 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Jan 30 13:54:14.488013 kernel: Booting paravirtualized kernel on bare hardware
Jan 30 13:54:14.488019 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:54:14.488024 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 13:54:14.488031 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 13:54:14.488036 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 13:54:14.488041 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 13:54:14.488047 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:54:14.488053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:54:14.488058 kernel: random: crng init done
Jan 30 13:54:14.488063 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 30 13:54:14.488068 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 30 13:54:14.488075 kernel: Fallback order for Node 0: 0
Jan 30 13:54:14.488080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327
Jan 30 13:54:14.488086 kernel: Policy zone: Normal
Jan 30 13:54:14.488091 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:54:14.488096 kernel: software IO TLB: area num 16.
Jan 30 13:54:14.488102 kernel: Memory: 32677260K/33411988K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734468K reserved, 0K cma-reserved)
Jan 30 13:54:14.488107 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 13:54:14.488112 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:54:14.488118 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:54:14.488124 kernel: Dynamic Preempt: voluntary
Jan 30 13:54:14.488130 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:54:14.488135 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:54:14.488140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 13:54:14.488146 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:54:14.488151 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:54:14.488156 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:54:14.488162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:54:14.488167 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 13:54:14.488172 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 30 13:54:14.488179 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:54:14.488184 kernel: Console: colour VGA+ 80x25
Jan 30 13:54:14.488190 kernel: printk: console [tty0] enabled
Jan 30 13:54:14.488195 kernel: printk: console [ttyS1] enabled
Jan 30 13:54:14.488200 kernel: ACPI: Core revision 20230628
Jan 30 13:54:14.488206 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Jan 30 13:54:14.488211 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:54:14.488216 kernel: DMAR: Host address width 39
Jan 30 13:54:14.488222 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Jan 30 13:54:14.488228 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Jan 30 13:54:14.488233 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 30 13:54:14.488239 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 30 13:54:14.488244 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Jan 30 13:54:14.488249 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Jan 30 13:54:14.488255 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Jan 30 13:54:14.488260 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 30 13:54:14.488266 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 30 13:54:14.488271 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 30 13:54:14.488277 kernel: x2apic enabled
Jan 30 13:54:14.488283 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 30 13:54:14.488288 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:54:14.488293 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 30 13:54:14.488299 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 30 13:54:14.488304 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 30 13:54:14.488310 kernel: process: using mwait in idle threads
Jan 30 13:54:14.488315 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:54:14.488320 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:54:14.488327 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:54:14.488332 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 13:54:14.488338 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 13:54:14.488343 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 13:54:14.488348 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:54:14.488354 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 13:54:14.488359 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 13:54:14.488364 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:54:14.488370 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:54:14.488376 kernel: TAA: Mitigation: TSX disabled
Jan 30 13:54:14.488381 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 30 13:54:14.488387 kernel: SRBDS: Mitigation: Microcode
Jan 30 13:54:14.488392 kernel: GDS: Mitigation: Microcode
Jan 30 13:54:14.488397 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:54:14.488403 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:54:14.488408 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:54:14.488413 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:54:14.488419 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:54:14.488432 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:54:14.488437 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 13:54:14.488443 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 13:54:14.488448 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 30 13:54:14.488453 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:54:14.488459 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:54:14.488464 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:54:14.488469 kernel: landlock: Up and running.
Jan 30 13:54:14.488475 kernel: SELinux: Initializing.
Jan 30 13:54:14.488481 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:54:14.488486 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:54:14.488492 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 30 13:54:14.488497 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 13:54:14.488503 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 13:54:14.488508 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 13:54:14.488514 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 30 13:54:14.488519 kernel: ... version:                4
Jan 30 13:54:14.488525 kernel: ... bit width:              48
Jan 30 13:54:14.488531 kernel: ... generic registers:      4
Jan 30 13:54:14.488536 kernel: ... value mask:             0000ffffffffffff
Jan 30 13:54:14.488541 kernel: ... max period:             00007fffffffffff
Jan 30 13:54:14.488546 kernel: ... fixed-purpose events:   3
Jan 30 13:54:14.488552 kernel: ... event mask:             000000070000000f
Jan 30 13:54:14.488557 kernel: signal: max sigframe size: 2032
Jan 30 13:54:14.488562 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 30 13:54:14.488568 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:54:14.488574 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 30 13:54:14.488579 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 30 13:54:14.488585 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:54:14.488590 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:54:14.488596 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 30 13:54:14.488601 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:54:14.488607 kernel: smp: Brought up 1 node, 16 CPUs
Jan 30 13:54:14.488612 kernel: smpboot: Max logical packages: 1
Jan 30 13:54:14.488617 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 30 13:54:14.488624 kernel: devtmpfs: initialized
Jan 30 13:54:14.488629 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:54:14.488634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfb2000-0x6dfb2fff] (4096 bytes)
Jan 30 13:54:14.488640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes)
Jan 30 13:54:14.488645 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:54:14.488651 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 13:54:14.488656 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:54:14.488661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:54:14.488667 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:54:14.488673 kernel: audit: type=2000 audit(1738245249.131:1): state=initialized audit_enabled=0 res=1
Jan 30 13:54:14.488678 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:54:14.488684 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:54:14.488689 kernel: cpuidle: using governor menu
Jan 30 13:54:14.488694 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:54:14.488700 kernel: dca service started, version 1.12.1
Jan 30 13:54:14.488705 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 13:54:14.488710 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:54:14.488715 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 30 13:54:14.488722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:54:14.488727 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:54:14.488732 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:54:14.488738 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:54:14.488743 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:54:14.488748 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:54:14.488754 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:54:14.488759 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:54:14.488764 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:54:14.488771 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 30 13:54:14.488776 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:54:14.488782 kernel: ACPI: SSDT 0xFFFF9879011A0800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 30 13:54:14.488787 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:54:14.488792 kernel: ACPI: SSDT 0xFFFF987901198800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 30 13:54:14.488798 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:54:14.488803 kernel: ACPI: SSDT 0xFFFF987901187C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 30 13:54:14.488808 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:54:14.488814 kernel: ACPI: SSDT 0xFFFF987901199000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 30 13:54:14.488820 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:54:14.488825 kernel: ACPI: SSDT 0xFFFF9879011AD000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 30 13:54:14.488831 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:54:14.488836 kernel: ACPI: SSDT 0xFFFF98790226B800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 30 13:54:14.488841 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 30 13:54:14.488847 kernel: ACPI: Interpreter enabled
Jan 30 13:54:14.488852 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:54:14.488857 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:54:14.488863 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 30 13:54:14.488868 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 30 13:54:14.488874 kernel: HEST: Table parsing has been initialized.
Jan 30 13:54:14.488880 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 30 13:54:14.488885 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:54:14.488890 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:54:14.488896 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 30 13:54:14.488901 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 30 13:54:14.488907 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 30 13:54:14.488912 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 30 13:54:14.488917 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 30 13:54:14.488924 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 30 13:54:14.488929 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jan 30 13:54:14.488934 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 30 13:54:14.488940 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 30 13:54:14.488945 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 30 13:54:14.488951 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 30 13:54:14.488956 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 30 13:54:14.488961 kernel: ACPI: \PIN_: New power resource
Jan 30 13:54:14.488967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 30 13:54:14.489041 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:54:14.489094 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 30 13:54:14.489141 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 30 13:54:14.489149 kernel: PCI host bridge to bus 0000:00
Jan 30 13:54:14.489200 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:54:14.489243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:54:14.489289 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:54:14.489331 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Jan 30 13:54:14.489373 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 30 13:54:14.489414 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 30 13:54:14.489474 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 30 13:54:14.489529 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 30 13:54:14.489582 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 30 13:54:14.489636 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Jan 30 13:54:14.489686 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Jan 30 13:54:14.489737 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Jan 30 13:54:14.489785 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Jan 30 13:54:14.489833 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Jan 30 13:54:14.489881 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Jan 30 13:54:14.489937 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 30 13:54:14.489986 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Jan 30 13:54:14.490037 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 30 13:54:14.490085 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Jan 30 13:54:14.490138 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 30 13:54:14.490188 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Jan 30 13:54:14.490244 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 30 13:54:14.490295 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 30 13:54:14.490346 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Jan 30 13:54:14.490393 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Jan 30 13:54:14.490448 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 30 13:54:14.490496 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 13:54:14.490551 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 30 13:54:14.490599 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 13:54:14.490654 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 30 13:54:14.490701 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Jan 30 13:54:14.490748 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 30 13:54:14.490799 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 30 13:54:14.490847 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Jan 30 13:54:14.490898 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 30 13:54:14.490951 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 30 13:54:14.490999 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Jan 30 13:54:14.491046 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 30 13:54:14.491100 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 30 13:54:14.491148 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Jan 30 13:54:14.491195 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Jan 30 13:54:14.491242 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Jan 30 13:54:14.491290 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Jan 30 13:54:14.491336 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Jan 30 13:54:14.491384 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Jan 30 13:54:14.491437 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 30 13:54:14.491491 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 13:54:14.491539 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491593 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 13:54:14.491641 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491696 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 13:54:14.491748 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491800 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 13:54:14.491850 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491902 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Jan 30 13:54:14.491951 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.492005 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 13:54:14.492056 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:54:14.492108 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 13:54:14.492161 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 13:54:14.492208 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit] Jan 30 13:54:14.492257 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 13:54:14.492308 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 13:54:14.492359 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 13:54:14.492407 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:54:14.492466 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 13:54:14.492517 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 13:54:14.492566 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref] Jan 30 13:54:14.492615 kernel: pci 0000:02:00.0: PME# supported from D3cold Jan 30 13:54:14.492664 kernel: pci 
0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:54:14.492714 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:54:14.492769 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 13:54:14.492820 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 13:54:14.492868 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Jan 30 13:54:14.492917 kernel: pci 0000:02:00.1: PME# supported from D3cold Jan 30 13:54:14.492967 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:54:14.493015 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:54:14.493068 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 30 13:54:14.493116 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 30 13:54:14.493163 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:54:14.493211 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 30 13:54:14.493264 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 13:54:14.493313 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:54:14.493363 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Jan 30 13:54:14.493411 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 13:54:14.493499 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Jan 30 13:54:14.493548 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.493597 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 30 13:54:14.493645 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:54:14.493693 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 30 13:54:14.493751 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jan 30 13:54:14.493800 kernel: pci 
0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:54:14.493852 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Jan 30 13:54:14.493900 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 13:54:14.493949 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Jan 30 13:54:14.493998 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.494046 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 30 13:54:14.494095 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:54:14.494142 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 30 13:54:14.494190 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 30 13:54:14.494245 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 13:54:14.494296 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 30 13:54:14.494345 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 13:54:14.494394 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:54:14.494446 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 30 13:54:14.494493 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.494542 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.494608 kernel: pci_bus 0000:08: extended config space not accessible Jan 30 13:54:14.494667 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 13:54:14.494719 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Jan 30 13:54:14.494770 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Jan 30 13:54:14.494823 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 13:54:14.494873 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:54:14.494925 kernel: pci 0000:08:00.0: supports D1 D2 Jan 30 13:54:14.494979 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:54:14.495030 kernel: pci 0000:07:00.0: 
PCI bridge to [bus 08] Jan 30 13:54:14.495078 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.495130 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.495138 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 13:54:14.495144 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 13:54:14.495150 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 13:54:14.495156 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 13:54:14.495163 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 13:54:14.495169 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 13:54:14.495174 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 13:54:14.495180 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 13:54:14.495186 kernel: iommu: Default domain type: Translated Jan 30 13:54:14.495192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:54:14.495197 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:54:14.495203 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:54:14.495209 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 13:54:14.495215 kernel: e820: reserve RAM buffer [mem 0x6dfb2000-0x6fffffff] Jan 30 13:54:14.495221 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Jan 30 13:54:14.495226 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Jan 30 13:54:14.495232 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Jan 30 13:54:14.495237 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Jan 30 13:54:14.495288 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 30 13:54:14.495339 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 30 13:54:14.495390 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:54:14.495399 kernel: vgaarb: loaded Jan 30 
13:54:14.495407 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 30 13:54:14.495413 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 30 13:54:14.495418 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:54:14.495458 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:54:14.495464 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:54:14.495470 kernel: pnp: PnP ACPI init Jan 30 13:54:14.495523 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 13:54:14.495571 kernel: pnp 00:02: [dma 0 disabled] Jan 30 13:54:14.495622 kernel: pnp 00:03: [dma 0 disabled] Jan 30 13:54:14.495671 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 13:54:14.495715 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 13:54:14.495764 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 13:54:14.495810 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 13:54:14.495855 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 13:54:14.495900 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 13:54:14.495944 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 13:54:14.495987 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 13:54:14.496033 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 13:54:14.496078 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 13:54:14.496122 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 13:54:14.496168 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 13:54:14.496215 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 13:54:14.496259 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 13:54:14.496301 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 
13:54:14.496345 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 13:54:14.496387 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 13:54:14.496434 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 13:54:14.496519 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 13:54:14.496529 kernel: pnp: PnP ACPI: found 10 devices Jan 30 13:54:14.496535 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:54:14.496541 kernel: NET: Registered PF_INET protocol family Jan 30 13:54:14.496547 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:54:14.496553 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 13:54:14.496559 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:54:14.496565 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:54:14.496570 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:54:14.496576 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 13:54:14.496583 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:54:14.496589 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:54:14.496594 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:54:14.496600 kernel: NET: Registered PF_XDP protocol family Jan 30 13:54:14.496649 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Jan 30 13:54:14.496698 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Jan 30 13:54:14.496747 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Jan 30 13:54:14.496796 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:54:14.496849 kernel: pci 0000:02:00.0: BAR 7: no space 
for [mem size 0x00800000 64bit pref] Jan 30 13:54:14.496898 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:54:14.496949 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:54:14.496999 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:54:14.497050 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 30 13:54:14.497099 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 30 13:54:14.497148 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:54:14.497197 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 30 13:54:14.497244 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 30 13:54:14.497292 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:54:14.497340 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 30 13:54:14.497388 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 30 13:54:14.497439 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:54:14.497490 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 30 13:54:14.497538 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 30 13:54:14.497588 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 30 13:54:14.497637 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.497685 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.497733 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 30 13:54:14.497780 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.497828 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.497871 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 13:54:14.497917 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:54:14.497959 kernel: pci_bus 0000:00: 
resource 5 [io 0x0d00-0xffff window] Jan 30 13:54:14.498002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:54:14.498044 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Jan 30 13:54:14.498085 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 13:54:14.498133 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Jan 30 13:54:14.498178 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:54:14.498228 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 30 13:54:14.498273 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Jan 30 13:54:14.498323 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 30 13:54:14.498368 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Jan 30 13:54:14.498415 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 13:54:14.498499 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Jan 30 13:54:14.498549 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 30 13:54:14.498594 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Jan 30 13:54:14.498602 kernel: PCI: CLS 64 bytes, default 64 Jan 30 13:54:14.498608 kernel: DMAR: No ATSR found Jan 30 13:54:14.498614 kernel: DMAR: No SATC found Jan 30 13:54:14.498620 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 30 13:54:14.498625 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 30 13:54:14.498631 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 30 13:54:14.498637 kernel: DMAR: IOMMU feature pasid inconsistent Jan 30 13:54:14.498644 kernel: DMAR: IOMMU feature eafs inconsistent Jan 30 13:54:14.498650 kernel: DMAR: IOMMU feature prs inconsistent Jan 30 13:54:14.498656 kernel: DMAR: IOMMU feature nest inconsistent Jan 30 13:54:14.498661 kernel: DMAR: IOMMU feature mts inconsistent Jan 30 13:54:14.498667 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 30 13:54:14.498673 kernel: 
DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 30 13:54:14.498678 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 13:54:14.498684 kernel: DMAR: dmar1: Using Queued invalidation Jan 30 13:54:14.498732 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 30 13:54:14.498784 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 30 13:54:14.498834 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 30 13:54:14.498882 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 30 13:54:14.498930 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 30 13:54:14.498977 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 30 13:54:14.499025 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 30 13:54:14.499072 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 30 13:54:14.499119 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 30 13:54:14.499167 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 30 13:54:14.499215 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 30 13:54:14.499262 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 30 13:54:14.499310 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 30 13:54:14.499358 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 30 13:54:14.499405 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 30 13:54:14.499456 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 30 13:54:14.499504 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 30 13:54:14.499555 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 30 13:54:14.499602 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 30 13:54:14.499650 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 30 13:54:14.499697 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 30 13:54:14.499745 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 30 13:54:14.499792 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 30 13:54:14.499841 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 30 13:54:14.499890 kernel: pci 0000:02:00.1: 
Adding to iommu group 2 Jan 30 13:54:14.499943 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 13:54:14.499993 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 30 13:54:14.500041 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 30 13:54:14.500092 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 30 13:54:14.500101 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 13:54:14.500107 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:54:14.500113 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Jan 30 13:54:14.500119 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 30 13:54:14.500124 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 13:54:14.500132 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 13:54:14.500137 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 13:54:14.500143 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 30 13:54:14.500193 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 13:54:14.500202 kernel: Initialise system trusted keyrings Jan 30 13:54:14.500207 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 13:54:14.500213 kernel: Key type asymmetric registered Jan 30 13:54:14.500219 kernel: Asymmetric key parser 'x509' registered Jan 30 13:54:14.500226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:54:14.500232 kernel: io scheduler mq-deadline registered Jan 30 13:54:14.500238 kernel: io scheduler kyber registered Jan 30 13:54:14.500243 kernel: io scheduler bfq registered Jan 30 13:54:14.500291 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 30 13:54:14.500340 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 30 13:54:14.500389 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 30 13:54:14.500453 kernel: pcieport 
0000:00:1b.4: PME: Signaling with IRQ 125 Jan 30 13:54:14.500505 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 30 13:54:14.500553 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 30 13:54:14.500600 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 30 13:54:14.500652 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 13:54:14.500662 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 13:54:14.500668 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 13:54:14.500674 kernel: pstore: Using crash dump compression: deflate Jan 30 13:54:14.500680 kernel: pstore: Registered erst as persistent store backend Jan 30 13:54:14.500687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:54:14.500693 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:54:14.500698 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:54:14.500704 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:54:14.500751 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 13:54:14.500760 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:54:14.500803 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 13:54:14.500848 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 13:54:14.500895 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T13:54:13 UTC (1738245253) Jan 30 13:54:14.500938 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:54:14.500947 kernel: intel_pstate: Intel P-state driver initializing Jan 30 13:54:14.500952 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 13:54:14.500958 kernel: intel_pstate: HWP enabled Jan 30 13:54:14.500964 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:54:14.500969 kernel: Segment Routing with IPv6 Jan 30 13:54:14.500975 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:54:14.500981 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:54:14.500988 kernel: Key type dns_resolver registered Jan 30 13:54:14.500994 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 13:54:14.500999 kernel: IPI shorthand broadcast: enabled Jan 30 13:54:14.501005 kernel: sched_clock: Marking stable (2761052121, 1456325044)->(4688810292, -471433127) Jan 30 13:54:14.501011 kernel: registered taskstats version 1 Jan 30 13:54:14.501016 kernel: Loading compiled-in X.509 certificates Jan 30 13:54:14.501022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:54:14.501027 kernel: Key type .fscrypt registered Jan 30 13:54:14.501033 kernel: Key type fscrypt-provisioning registered Jan 30 13:54:14.501040 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:54:14.501045 kernel: ima: No architecture policies found Jan 30 13:54:14.501051 kernel: clk: Disabling unused clocks Jan 30 13:54:14.501057 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:54:14.501063 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:54:14.501068 kernel: Freeing unused kernel image (rodata/data gap) memory: 
1776K Jan 30 13:54:14.501074 kernel: Run /init as init process Jan 30 13:54:14.501080 kernel: with arguments: Jan 30 13:54:14.501086 kernel: /init Jan 30 13:54:14.501092 kernel: with environment: Jan 30 13:54:14.501097 kernel: HOME=/ Jan 30 13:54:14.501103 kernel: TERM=linux Jan 30 13:54:14.501108 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:54:14.501115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:54:14.501122 systemd[1]: Detected architecture x86-64. Jan 30 13:54:14.501129 systemd[1]: Running in initrd. Jan 30 13:54:14.501136 systemd[1]: No hostname configured, using default hostname. Jan 30 13:54:14.501141 systemd[1]: Hostname set to . Jan 30 13:54:14.501147 systemd[1]: Initializing machine ID from random generator. Jan 30 13:54:14.501153 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:54:14.501159 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:14.501165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:14.501172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:54:14.501178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:54:14.501185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:54:14.501191 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 30 13:54:14.501197 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:54:14.501204 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:54:14.501210 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:14.501216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:14.501223 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:54:14.501229 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:54:14.501235 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:54:14.501240 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:54:14.501247 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:54:14.501252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:54:14.501258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:54:14.501264 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:54:14.501270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:14.501277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:14.501283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:14.501289 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:54:14.501295 kernel: tsc: Refined TSC clocksource calibration: 3407.986 MHz Jan 30 13:54:14.501301 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc6d7268, max_idle_ns: 440795260133 ns Jan 30 13:54:14.501307 kernel: clocksource: Switched to clocksource tsc Jan 30 13:54:14.501312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 30 13:54:14.501318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:54:14.501324 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:54:14.501331 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:54:14.501337 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:54:14.501354 systemd-journald[269]: Collecting audit messages is disabled. Jan 30 13:54:14.501368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:54:14.501376 systemd-journald[269]: Journal started Jan 30 13:54:14.501389 systemd-journald[269]: Runtime Journal (/run/log/journal/2f8984513cab4a7186a223b458419b42) is 8.0M, max 639.1M, 631.1M free. Jan 30 13:54:14.504011 systemd-modules-load[271]: Inserted module 'overlay' Jan 30 13:54:14.520522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:14.543428 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:54:14.543450 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:54:14.550877 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:54:14.550968 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:14.551052 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:54:14.552001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:54:14.556572 systemd-modules-load[271]: Inserted module 'br_netfilter' Jan 30 13:54:14.556990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:54:14.576527 kernel: Bridge firewalling registered Jan 30 13:54:14.576597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 13:54:14.658378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:14.686800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:54:14.709860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:14.750683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:14.762064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:54:14.762569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:54:14.768397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:14.768542 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:14.769925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:54:14.779735 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:14.799084 systemd-resolved[304]: Positive Trust Anchors: Jan 30 13:54:14.799095 systemd-resolved[304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:54:14.799140 systemd-resolved[304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:54:14.799654 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:54:14.801918 systemd-resolved[304]: Defaulting to hostname 'linux'. Jan 30 13:54:14.811657 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:54:14.926642 dracut-cmdline[313]: dracut-dracut-053 Jan 30 13:54:14.926642 dracut-cmdline[313]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:54:14.818713 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:15.002490 kernel: SCSI subsystem initialized Jan 30 13:54:15.014473 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:54:15.026428 kernel: iscsi: registered transport (tcp) Jan 30 13:54:15.047184 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:54:15.047201 kernel: QLogic iSCSI HBA Driver Jan 30 13:54:15.070202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:54:15.096714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:54:15.137252 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:54:15.137270 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:54:15.146067 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:54:15.181485 kernel: raid6: avx2x4 gen() 46832 MB/s Jan 30 13:54:15.202462 kernel: raid6: avx2x2 gen() 53704 MB/s Jan 30 13:54:15.228501 kernel: raid6: avx2x1 gen() 45024 MB/s Jan 30 13:54:15.228518 kernel: raid6: using algorithm avx2x2 gen() 53704 MB/s Jan 30 13:54:15.255588 kernel: raid6: .... 
xor() 32731 MB/s, rmw enabled Jan 30 13:54:15.255605 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:54:15.276462 kernel: xor: automatically using best checksumming function avx Jan 30 13:54:15.373470 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:54:15.378940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:15.399767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:15.435889 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jan 30 13:54:15.438332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:15.475665 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:54:15.495656 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Jan 30 13:54:15.540581 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:15.566873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:15.651610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:15.675890 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:54:15.675911 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:54:15.676427 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:54:15.685724 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:54:15.733537 kernel: libata version 3.00 loaded. Jan 30 13:54:15.733553 kernel: PTP clock support registered Jan 30 13:54:15.733560 kernel: ACPI: bus type USB registered Jan 30 13:54:15.733574 kernel: usbcore: registered new interface driver usbfs Jan 30 13:54:15.733590 kernel: usbcore: registered new interface driver hub Jan 30 13:54:15.733598 kernel: usbcore: registered new device driver usb Jan 30 13:54:15.733607 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 30 13:54:15.733615 kernel: AES CTR mode by8 optimization enabled Jan 30 13:54:15.733622 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 13:54:15.872169 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:54:15.872348 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 30 13:54:15.872629 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 13:54:15.872848 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 13:54:15.873054 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 13:54:15.873350 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:54:15.873717 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 13:54:15.873930 kernel: scsi host0: ahci Jan 30 13:54:15.874120 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 13:54:15.874319 kernel: scsi host1: ahci Jan 30 13:54:15.874584 kernel: hub 1-0:1.0: USB hub found Jan 30 13:54:15.874829 kernel: scsi host2: ahci Jan 30 13:54:15.875024 kernel: hub 1-0:1.0: 16 ports detected Jan 30 13:54:15.875222 kernel: scsi host3: ahci Jan 30 13:54:15.875419 kernel: hub 2-0:1.0: USB hub found Jan 30 13:54:15.875717 kernel: scsi host4: ahci Jan 30 13:54:15.875908 kernel: hub 2-0:1.0: 10 ports detected Jan 30 13:54:15.876161 kernel: scsi host5: ahci Jan 30 13:54:15.876364 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 13:54:15.876383 kernel: scsi host6: ahci Jan 30 13:54:15.876641 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 30 13:54:15.876669 kernel: scsi host7: ahci Jan 30 13:54:15.876845 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Jan 30 13:54:15.876879 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Jan 30 13:54:15.876896 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Jan 30 13:54:15.876919 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Jan 30 13:54:15.876936 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Jan 30 13:54:15.876950 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Jan 30 13:54:15.876964 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Jan 30 13:54:15.876977 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Jan 30 13:54:15.735744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:15.984842 kernel: pps pps0: new PPS source ptp0 Jan 30 13:54:15.984929 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 30 13:54:15.985005 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:54:15.985075 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1b:6e Jan 30 13:54:15.985146 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 30 13:54:15.985218 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 13:54:15.985289 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Jan 30 13:54:16.452113 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:54:16.452197 kernel: pps pps1: new PPS source ptp1 Jan 30 13:54:16.452267 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 30 13:54:16.452337 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:54:16.452403 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1b:6f Jan 30 13:54:16.452476 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 30 13:54:16.452541 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 13:54:16.452608 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 13:54:16.618252 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618264 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618272 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:54:16.618279 kernel: hub 1-14:1.0: USB hub found Jan 30 13:54:16.618367 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:54:16.618376 kernel: hub 1-14:1.0: 4 ports detected Jan 30 13:54:16.618457 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618466 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618473 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618480 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 13:54:16.618553 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618561 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 30 13:54:16.618626 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 13:54:16.618634 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 
13:54:16.618641 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:54:16.618651 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:54:16.618659 kernel: ata1.00: Features: NCQ-prio Jan 30 13:54:16.618666 kernel: ata2.00: Features: NCQ-prio Jan 30 13:54:16.618673 kernel: ata1.00: configured for UDMA/133 Jan 30 13:54:16.618681 kernel: ata2.00: configured for UDMA/133 Jan 30 13:54:16.618688 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 13:54:16.618756 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 13:54:16.618820 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 30 13:54:16.618890 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:16.618899 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:54:16.618961 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:54:16.618969 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:54:16.619029 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:54:16.619088 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:54:16.619147 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 30 13:54:16.619215 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 13:54:16.619275 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 13:54:16.619334 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 13:54:16.619391 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:54:16.619454 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 13:54:16.619513 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 13:54:16.619571 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:54:16.619630 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:16.619641 kernel: sd 1:0:0:0: [sdb] Preferred 
minimum I/O size 4096 bytes Jan 30 13:54:16.619699 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:54:16.619763 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:54:16.619771 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Jan 30 13:54:17.000294 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 13:54:17.000763 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:54:17.001184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:54:17.001250 kernel: GPT:9289727 != 937703087 Jan 30 13:54:17.001290 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:54:17.001326 kernel: GPT:9289727 != 937703087 Jan 30 13:54:17.001361 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:54:17.001397 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 13:54:17.001984 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:17.002030 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:54:17.002387 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (543) Jan 30 13:54:17.002462 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (552) Jan 30 13:54:17.002503 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:54:17.002540 kernel: usbcore: registered new interface driver usbhid Jan 30 13:54:17.002576 kernel: usbhid: USB HID core driver Jan 30 13:54:17.002611 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 13:54:17.002649 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:17.002685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:17.002720 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 
30 13:54:17.003098 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 13:54:17.003153 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 13:54:17.003669 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 13:54:17.004215 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 30 13:54:17.004639 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:54:17.004986 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jan 30 13:54:15.735850 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:15.985216 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:17.041553 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 30 13:54:15.999951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:16.000051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:16.042533 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:16.067601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:16.077714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:17.091591 disk-uuid[710]: Primary Header is updated. Jan 30 13:54:17.091591 disk-uuid[710]: Secondary Entries is updated. Jan 30 13:54:17.091591 disk-uuid[710]: Secondary Header is updated. Jan 30 13:54:16.078173 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:16.078217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 13:54:16.078244 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:16.078685 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:54:16.129600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:16.140651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:16.159599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:16.168657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:16.552038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 30 13:54:16.581284 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 30 13:54:16.598822 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 13:54:16.609523 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 13:54:16.624289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 13:54:16.667584 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:54:17.692650 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:17.700407 disk-uuid[711]: The operation has completed successfully. Jan 30 13:54:17.708532 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:17.737563 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:54:17.737614 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:54:17.786637 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 30 13:54:17.814566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:54:17.814581 sh[741]: Success Jan 30 13:54:17.848539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:54:17.869461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:54:17.879670 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:54:17.920916 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:54:17.921056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:17.931663 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:54:17.938661 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:54:17.944568 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:54:17.959455 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:54:17.961517 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:54:17.969850 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:54:17.982691 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:54:18.005005 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 13:54:18.075520 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:18.075534 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:18.075542 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:18.075549 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:18.075556 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:54:18.075563 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:18.065890 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:54:18.078227 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:54:18.093722 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:18.127718 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:54:18.138325 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:18.178367 systemd-networkd[925]: lo: Link UP Jan 30 13:54:18.178370 systemd-networkd[925]: lo: Gained carrier Jan 30 13:54:18.180942 systemd-networkd[925]: Enumeration completed Jan 30 13:54:18.195378 ignition[923]: Ignition 2.20.0 Jan 30 13:54:18.181024 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:18.195382 ignition[923]: Stage: fetch-offline Jan 30 13:54:18.181746 systemd-networkd[925]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:18.195403 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:18.185623 systemd[1]: Reached target network.target - Network. 
Jan 30 13:54:18.195408 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:18.197591 unknown[923]: fetched base config from "system" Jan 30 13:54:18.195463 ignition[923]: parsed url from cmdline: "" Jan 30 13:54:18.197595 unknown[923]: fetched user config from "system" Jan 30 13:54:18.195465 ignition[923]: no config URL provided Jan 30 13:54:18.209725 systemd-networkd[925]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:18.195467 ignition[923]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:54:18.214664 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:54:18.195489 ignition[923]: parsing config with SHA512: 4288d7dba6db387275eb3dd08f03c9fe77f76efef20b2b5d8acc66526758e403b9c5231a1368c8bd5165a24dac17c42c370a3963fd78cb0bca937cbc9e50baf7 Jan 30 13:54:18.238061 systemd-networkd[925]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:18.197793 ignition[923]: fetch-offline: fetch-offline passed Jan 30 13:54:18.239936 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:54:18.197795 ignition[923]: POST message to Packet Timeline Jan 30 13:54:18.252623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:54:18.197798 ignition[923]: POST Status error: resource requires networking Jan 30 13:54:18.197837 ignition[923]: Ignition finished successfully Jan 30 13:54:18.261725 ignition[937]: Ignition 2.20.0 Jan 30 13:54:18.261732 ignition[937]: Stage: kargs Jan 30 13:54:18.261885 ignition[937]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:18.446529 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 30 13:54:18.439390 systemd-networkd[925]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:54:18.261894 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:18.262657 ignition[937]: kargs: kargs passed Jan 30 13:54:18.262661 ignition[937]: POST message to Packet Timeline Jan 30 13:54:18.262678 ignition[937]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:18.263234 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47313->[::1]:53: read: connection refused Jan 30 13:54:18.463947 ignition[937]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 13:54:18.464660 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:32878->[::1]:53: read: connection refused Jan 30 13:54:18.658502 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 30 13:54:18.659551 systemd-networkd[925]: eno1: Link UP Jan 30 13:54:18.659794 systemd-networkd[925]: eno2: Link UP Jan 30 13:54:18.659910 systemd-networkd[925]: enp2s0f0np0: Link UP Jan 30 13:54:18.660045 systemd-networkd[925]: enp2s0f0np0: Gained carrier Jan 30 13:54:18.669614 systemd-networkd[925]: enp2s0f1np1: Link UP Jan 30 13:54:18.702628 systemd-networkd[925]: enp2s0f0np0: DHCPv4 address 147.75.90.195/31, gateway 147.75.90.194 acquired from 145.40.83.140 Jan 30 13:54:18.865600 ignition[937]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 13:54:18.866748 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45984->[::1]:53: read: connection refused Jan 30 13:54:19.467092 systemd-networkd[925]: enp2s0f1np1: Gained carrier Jan 30 13:54:19.667933 ignition[937]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 13:54:19.669156 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48396->[::1]:53: read: 
connection refused Jan 30 13:54:20.362916 systemd-networkd[925]: enp2s0f0np0: Gained IPv6LL Jan 30 13:54:20.618957 systemd-networkd[925]: enp2s0f1np1: Gained IPv6LL Jan 30 13:54:21.270706 ignition[937]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 13:54:21.271905 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39376->[::1]:53: read: connection refused Jan 30 13:54:24.475407 ignition[937]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 13:54:25.224796 ignition[937]: GET result: OK Jan 30 13:54:25.611021 ignition[937]: Ignition finished successfully Jan 30 13:54:25.612972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:54:25.639712 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:54:25.645963 ignition[957]: Ignition 2.20.0 Jan 30 13:54:25.645967 ignition[957]: Stage: disks Jan 30 13:54:25.646071 ignition[957]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:25.646077 ignition[957]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:25.646572 ignition[957]: disks: disks passed Jan 30 13:54:25.646574 ignition[957]: POST message to Packet Timeline Jan 30 13:54:25.646586 ignition[957]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:26.232064 ignition[957]: GET result: OK Jan 30 13:54:26.632527 ignition[957]: Ignition finished successfully Jan 30 13:54:26.634264 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:54:26.652642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:26.671688 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:54:26.693747 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:26.712736 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 30 13:54:26.729746 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:54:26.763696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:54:26.799911 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:54:26.809867 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:54:26.838597 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:54:26.909480 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:54:26.909909 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:54:26.917924 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:54:26.951574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:54:26.998469 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (983) Jan 30 13:54:26.998483 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:26.998492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:26.998503 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:26.960342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:54:27.028513 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:27.028531 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:54:26.999096 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:54:27.029022 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 13:54:27.051580 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 30 13:54:27.099659 coreos-metadata[1000]: Jan 30 13:54:27.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:54:27.051598 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:27.082496 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:54:27.146493 coreos-metadata[1001]: Jan 30 13:54:27.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:54:27.107671 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:54:27.139709 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:54:27.186555 initrd-setup-root[1015]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:54:27.197540 initrd-setup-root[1022]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:54:27.207543 initrd-setup-root[1029]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:54:27.218522 initrd-setup-root[1036]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:54:27.225176 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:27.255696 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:54:27.282660 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:27.272226 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:54:27.291169 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 13:54:27.314834 ignition[1103]: INFO : Ignition 2.20.0 Jan 30 13:54:27.314834 ignition[1103]: INFO : Stage: mount Jan 30 13:54:27.321524 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:27.321524 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:27.321524 ignition[1103]: INFO : mount: mount passed Jan 30 13:54:27.321524 ignition[1103]: INFO : POST message to Packet Timeline Jan 30 13:54:27.321524 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:27.320371 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:54:27.392663 coreos-metadata[1001]: Jan 30 13:54:27.358 INFO Fetch successful Jan 30 13:54:27.427751 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 13:54:27.427812 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 13:54:27.746134 coreos-metadata[1000]: Jan 30 13:54:27.745 INFO Fetch successful Jan 30 13:54:27.772741 coreos-metadata[1000]: Jan 30 13:54:27.772 INFO wrote hostname ci-4186.1.0-a-fe6ab79c24 to /sysroot/etc/hostname Jan 30 13:54:27.774098 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:54:27.821876 ignition[1103]: INFO : GET result: OK Jan 30 13:54:28.135350 ignition[1103]: INFO : Ignition finished successfully Jan 30 13:54:28.138373 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:54:28.170641 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:54:28.182408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:54:28.227205 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1127)
Jan 30 13:54:28.227224 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:54:28.235370 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:54:28.241255 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:54:28.256233 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:54:28.256249 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:54:28.258158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:54:28.284054 ignition[1144]: INFO : Ignition 2.20.0
Jan 30 13:54:28.284054 ignition[1144]: INFO : Stage: files
Jan 30 13:54:28.299664 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:28.299664 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:54:28.299664 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:54:28.299664 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:54:28.299664 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 13:54:28.299664 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 30 13:54:28.288099 unknown[1144]: wrote ssh authorized keys file for user: core
Jan 30 13:54:28.429646 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:54:28.460276 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 13:54:28.460276 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 13:54:28.973344 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:54:29.187828 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:54:29.187828 ignition[1144]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: files passed
Jan 30 13:54:29.218631 ignition[1144]: INFO : POST message to Packet Timeline
Jan 30 13:54:29.218631 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:54:29.787912 ignition[1144]: INFO : GET result: OK
Jan 30 13:54:30.176938 ignition[1144]: INFO : Ignition finished successfully
Jan 30 13:54:30.179532 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:54:30.213711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:54:30.224036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:54:30.234864 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:54:30.234926 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:54:30.294232 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:54:30.294232 initrd-setup-root-after-ignition[1184]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:54:30.332766 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:54:30.298714 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:54:30.309755 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:54:30.356842 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:54:30.458250 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:54:30.458538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:54:30.479939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:54:30.499802 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:54:30.519913 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:54:30.533844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:54:30.606886 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:54:30.632813 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:54:30.652215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:54:30.666761 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:54:30.687778 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:54:30.705812 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:54:30.705968 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:54:30.734189 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:54:30.756124 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:54:30.774126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:54:30.793117 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:54:30.814103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:54:30.835121 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:54:30.855113 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:54:30.877159 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:54:30.898145 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:54:30.918123 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:54:30.935996 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:54:30.936397 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:54:30.962234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:54:30.982142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:54:31.002974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:54:31.003333 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:54:31.026017 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:54:31.026420 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:54:31.058132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:54:31.058631 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:54:31.078315 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:54:31.095983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:54:31.096390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:54:31.117124 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:54:31.135128 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:54:31.153091 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:54:31.153393 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:54:31.173139 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:54:31.173468 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:54:31.196235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:54:31.196676 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:54:31.216191 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:54:31.216607 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:54:31.353584 ignition[1208]: INFO : Ignition 2.20.0
Jan 30 13:54:31.353584 ignition[1208]: INFO : Stage: umount
Jan 30 13:54:31.353584 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:31.353584 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:54:31.353584 ignition[1208]: INFO : umount: umount passed
Jan 30 13:54:31.353584 ignition[1208]: INFO : POST message to Packet Timeline
Jan 30 13:54:31.353584 ignition[1208]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:54:31.234202 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:54:31.234626 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:54:31.263706 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:54:31.269677 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:54:31.269761 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:54:31.316664 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:54:31.318777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:54:31.318918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:54:31.345676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:54:31.345747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:54:31.382892 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:54:31.385344 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:54:31.385468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:54:31.494522 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:54:31.494643 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:54:31.909500 ignition[1208]: INFO : GET result: OK
Jan 30 13:54:32.244656 ignition[1208]: INFO : Ignition finished successfully
Jan 30 13:54:32.247521 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:54:32.247821 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:54:32.266849 systemd[1]: Stopped target network.target - Network.
Jan 30 13:54:32.281684 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:54:32.281868 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:54:32.301825 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:54:32.301995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:54:32.320860 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:54:32.321018 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:54:32.339842 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:54:32.340005 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:54:32.358828 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:54:32.358997 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:54:32.378353 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:54:32.395593 systemd-networkd[925]: enp2s0f0np0: DHCPv6 lease lost
Jan 30 13:54:32.397886 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:54:32.407634 systemd-networkd[925]: enp2s0f1np1: DHCPv6 lease lost
Jan 30 13:54:32.416459 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:54:32.416730 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:54:32.435866 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:54:32.436199 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:54:32.456246 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:54:32.456373 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:54:32.488606 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:54:32.514604 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:54:32.514683 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:54:32.533772 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:54:32.533875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:54:32.553838 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:54:32.554003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:54:32.572818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:54:32.572983 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:54:32.593053 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:54:32.614804 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:54:32.615185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:54:32.648650 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:54:32.648797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:54:32.654926 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:54:32.655035 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:54:32.682703 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:54:32.682936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:54:32.714026 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:54:32.714184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:54:32.753628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:54:32.753899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:54:32.800777 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:54:32.805868 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:54:32.806028 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:54:32.836787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:54:33.059589 systemd-journald[269]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:54:32.836928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:54:32.856752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:54:32.857067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:54:32.939807 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:54:32.939875 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:54:32.948982 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:54:32.991604 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:54:33.013790 systemd[1]: Switching root.
Jan 30 13:54:33.132581 systemd-journald[269]: Journal stopped
Jan 30 13:54:14.487181 kernel: microcode: updated early: 0xde -> 0x100, date = 2024-02-05
Jan 30 13:54:14.487196 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:54:14.487203 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:54:14.487208 kernel: BIOS-provided physical RAM map:
Jan 30 13:54:14.487212 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 30 13:54:14.487216 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 30 13:54:14.487221 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 30 13:54:14.487225 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 30 13:54:14.487230 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 30 13:54:14.487234 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfb1fff] usable
Jan 30 13:54:14.487238 kernel: BIOS-e820: [mem 0x000000006dfb2000-0x000000006dfb2fff] ACPI NVS
Jan 30 13:54:14.487242 kernel: BIOS-e820: [mem 0x000000006dfb3000-0x000000006dfb3fff] reserved
Jan 30 13:54:14.487247 kernel: BIOS-e820: [mem 0x000000006dfb4000-0x0000000077fc4fff] usable
Jan 30 13:54:14.487251 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Jan 30 13:54:14.487256 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Jan 30 13:54:14.487262 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Jan 30 13:54:14.487266 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Jan 30 13:54:14.487271 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Jan 30 13:54:14.487276 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Jan 30 13:54:14.487280 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:54:14.487285 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 30 13:54:14.487290 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 30 13:54:14.487294 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 13:54:14.487299 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 30 13:54:14.487303 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Jan 30 13:54:14.487309 kernel: NX (Execute Disable) protection: active
Jan 30 13:54:14.487314 kernel: APIC: Static calls initialized
Jan 30 13:54:14.487318 kernel: SMBIOS 3.2.1 present.
Jan 30 13:54:14.487323 kernel: DMI: Supermicro X11SCH-F/X11SCH-F, BIOS 1.5 11/17/2020
Jan 30 13:54:14.487328 kernel: tsc: Detected 3400.000 MHz processor
Jan 30 13:54:14.487332 kernel: tsc: Detected 3399.906 MHz TSC
Jan 30 13:54:14.487337 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:54:14.487342 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:54:14.487347 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Jan 30 13:54:14.487352 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 30 13:54:14.487358 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:54:14.487363 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Jan 30 13:54:14.487367 kernel: Using GB pages for direct mapping
Jan 30 13:54:14.487372 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:54:14.487377 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 30 13:54:14.487384 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 30 13:54:14.487389 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Jan 30 13:54:14.487395 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 30 13:54:14.487400 kernel: ACPI: FACS 0x0000000079662F80 000040
Jan 30 13:54:14.487405 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Jan 30 13:54:14.487410 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Jan 30 13:54:14.487415 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 30 13:54:14.487420 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 30 13:54:14.487428 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 30 13:54:14.487452 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 30 13:54:14.487457 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 30 13:54:14.487462 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 30 13:54:14.487467 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487486 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 30 13:54:14.487491 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 30 13:54:14.487496 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487501 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487506 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 30 13:54:14.487512 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 30 13:54:14.487517 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487522 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:54:14.487527 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 30 13:54:14.487532 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:54:14.487537 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 30 13:54:14.487542 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 30 13:54:14.487547 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 30 13:54:14.487555 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xf4m 01072009 AMI 00010013)
Jan 30 13:54:14.487561 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 30 13:54:14.487566 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 30 13:54:14.487571 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 30 13:54:14.487576 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 30 13:54:14.487581 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 30 13:54:14.487586 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Jan 30 13:54:14.487591 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Jan 30 13:54:14.487596 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Jan 30 13:54:14.487602 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Jan 30 13:54:14.487607 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Jan 30 13:54:14.487612 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Jan 30 13:54:14.487617 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Jan 30 13:54:14.487622 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Jan 30 13:54:14.487627 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Jan 30 13:54:14.487631 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Jan 30 13:54:14.487637 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Jan 30 13:54:14.487641 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Jan 30 13:54:14.487647 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Jan 30 13:54:14.487652 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Jan 30 13:54:14.487657 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Jan 30 13:54:14.487662 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Jan 30 13:54:14.487667 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Jan 30 13:54:14.487672 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Jan 30 13:54:14.487677 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Jan 30 13:54:14.487682 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Jan 30 13:54:14.487687 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Jan 30 13:54:14.487692 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Jan 30 13:54:14.487698 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Jan 30 13:54:14.487703 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Jan 30 13:54:14.487708 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Jan 30 13:54:14.487713 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Jan 30 13:54:14.487718 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Jan 30 13:54:14.487723 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Jan 30 13:54:14.487728 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Jan 30 13:54:14.487733 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Jan 30 13:54:14.487737 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Jan 30 13:54:14.487743 kernel: No NUMA configuration found
Jan 30 13:54:14.487748 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Jan 30 13:54:14.487753 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Jan 30 13:54:14.487758 kernel: Zone ranges:
Jan 30 13:54:14.487763 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:54:14.487768 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:54:14.487773 kernel:   Normal   [mem 0x0000000100000000-0x000000087f7fffff]
Jan 30 13:54:14.487778 kernel: Movable zone start for each node
Jan 30 13:54:14.487783 kernel: Early memory node ranges
Jan 30 13:54:14.487789 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 30 13:54:14.487794 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 30 13:54:14.487799 kernel:   node   0: [mem 0x0000000040400000-0x000000006dfb1fff]
Jan 30 13:54:14.487804 kernel:   node   0: [mem 0x000000006dfb4000-0x0000000077fc4fff]
Jan 30 13:54:14.487809 kernel:   node   0: [mem 0x00000000790a8000-0x0000000079230fff]
Jan 30 13:54:14.487815 kernel:   node   0: [mem 0x000000007beff000-0x000000007befffff]
Jan 30 13:54:14.487823 kernel:   node   0: [mem 0x0000000100000000-0x000000087f7fffff]
Jan 30 13:54:14.487829 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Jan 30 13:54:14.487835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:54:14.487840 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 30 13:54:14.487846 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 30 13:54:14.487852 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 30 13:54:14.487857 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Jan 30 13:54:14.487862 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Jan 30 13:54:14.487868 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Jan 30 13:54:14.487873 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Jan 30 13:54:14.487878 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 30 13:54:14.487885 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 13:54:14.487890 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 13:54:14.487896 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 13:54:14.487901 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 13:54:14.487906 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 13:54:14.487911 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 13:54:14.487917 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 13:54:14.487922 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 13:54:14.487927 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 13:54:14.487933 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 13:54:14.487939 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 13:54:14.487944 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 13:54:14.487949 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 13:54:14.487954 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 13:54:14.487960 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 13:54:14.487965 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 13:54:14.487970 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 30 13:54:14.487975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:54:14.487981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:54:14.487987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:54:14.487992 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:54:14.487997 kernel: TSC deadline timer available
Jan 30 13:54:14.488003 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 30 13:54:14.488008 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Jan 30 13:54:14.488013 kernel: Booting paravirtualized kernel on bare hardware
Jan 30 13:54:14.488019 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:54:14.488024 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 13:54:14.488031 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 13:54:14.488036 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 13:54:14.488041 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 13:54:14.488047 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:54:14.488053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:54:14.488058 kernel: random: crng init done
Jan 30 13:54:14.488063 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 30 13:54:14.488068 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 30 13:54:14.488075 kernel: Fallback order for Node 0: 0
Jan 30 13:54:14.488080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327
Jan 30 13:54:14.488086 kernel: Policy zone: Normal
Jan 30 13:54:14.488091 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:54:14.488096 kernel: software IO TLB: area num 16.
Jan 30 13:54:14.488102 kernel: Memory: 32677260K/33411988K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734468K reserved, 0K cma-reserved)
Jan 30 13:54:14.488107 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 13:54:14.488112 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:54:14.488118 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:54:14.488124 kernel: Dynamic Preempt: voluntary
Jan 30 13:54:14.488130 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:54:14.488135 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:54:14.488140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 13:54:14.488146 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:54:14.488151 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:54:14.488156 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:54:14.488162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:54:14.488167 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 13:54:14.488172 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 13:54:14.488179 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:54:14.488184 kernel: Console: colour VGA+ 80x25 Jan 30 13:54:14.488190 kernel: printk: console [tty0] enabled Jan 30 13:54:14.488195 kernel: printk: console [ttyS1] enabled Jan 30 13:54:14.488200 kernel: ACPI: Core revision 20230628 Jan 30 13:54:14.488206 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jan 30 13:54:14.488211 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:54:14.488216 kernel: DMAR: Host address width 39 Jan 30 13:54:14.488222 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Jan 30 13:54:14.488228 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Jan 30 13:54:14.488233 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 13:54:14.488239 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 13:54:14.488244 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff Jan 30 13:54:14.488249 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff Jan 30 13:54:14.488255 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Jan 30 13:54:14.488260 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 13:54:14.488266 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
Jan 30 13:54:14.488271 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 13:54:14.488277 kernel: x2apic enabled Jan 30 13:54:14.488283 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 13:54:14.488288 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:54:14.488293 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 13:54:14.488299 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 30 13:54:14.488304 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 13:54:14.488310 kernel: process: using mwait in idle threads Jan 30 13:54:14.488315 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:54:14.488320 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:54:14.488327 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:54:14.488332 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:54:14.488338 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:54:14.488343 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 13:54:14.488348 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:54:14.488354 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 13:54:14.488359 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 13:54:14.488364 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:54:14.488370 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:54:14.488376 kernel: TAA: Mitigation: TSX disabled Jan 30 13:54:14.488381 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 13:54:14.488387 kernel: SRBDS: Mitigation: Microcode Jan 30 13:54:14.488392 kernel: GDS: Mitigation: 
Microcode Jan 30 13:54:14.488397 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:54:14.488403 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:54:14.488408 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:54:14.488413 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 13:54:14.488419 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 13:54:14.488432 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:54:14.488437 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 13:54:14.488443 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 13:54:14.488448 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 30 13:54:14.488453 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:54:14.488459 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:54:14.488464 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:54:14.488469 kernel: landlock: Up and running. Jan 30 13:54:14.488475 kernel: SELinux: Initializing. Jan 30 13:54:14.488481 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:54:14.488486 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:54:14.488492 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 13:54:14.488497 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:54:14.488503 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:54:14.488508 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 30 13:54:14.488514 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 13:54:14.488519 kernel: ... version: 4 Jan 30 13:54:14.488525 kernel: ... bit width: 48 Jan 30 13:54:14.488531 kernel: ... generic registers: 4 Jan 30 13:54:14.488536 kernel: ... value mask: 0000ffffffffffff Jan 30 13:54:14.488541 kernel: ... max period: 00007fffffffffff Jan 30 13:54:14.488546 kernel: ... fixed-purpose events: 3 Jan 30 13:54:14.488552 kernel: ... event mask: 000000070000000f Jan 30 13:54:14.488557 kernel: signal: max sigframe size: 2032 Jan 30 13:54:14.488562 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 13:54:14.488568 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:54:14.488574 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:54:14.488579 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 13:54:14.488585 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:54:14.488590 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:54:14.488596 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 13:54:14.488601 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:54:14.488607 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 13:54:14.488612 kernel: smpboot: Max logical packages: 1 Jan 30 13:54:14.488617 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 13:54:14.488624 kernel: devtmpfs: initialized Jan 30 13:54:14.488629 kernel: x86/mm: Memory block size: 128MB Jan 30 13:54:14.488634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfb2000-0x6dfb2fff] (4096 bytes) Jan 30 13:54:14.488640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes) Jan 30 13:54:14.488645 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:54:14.488651 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 13:54:14.488656 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:54:14.488661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:54:14.488667 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:54:14.488673 kernel: audit: type=2000 audit(1738245249.131:1): state=initialized audit_enabled=0 res=1 Jan 30 13:54:14.488678 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:54:14.488684 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:54:14.488689 kernel: cpuidle: using governor menu Jan 30 13:54:14.488694 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:54:14.488700 kernel: dca service started, version 1.12.1 Jan 30 13:54:14.488705 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 13:54:14.488710 kernel: PCI: Using configuration type 1 for base access Jan 30 13:54:14.488715 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 13:54:14.488722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:54:14.488727 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:54:14.488732 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:54:14.488738 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:54:14.488743 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:54:14.488748 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:54:14.488754 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:54:14.488759 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:54:14.488764 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:54:14.488771 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 13:54:14.488776 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:54:14.488782 kernel: ACPI: SSDT 0xFFFF9879011A0800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 13:54:14.488787 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:54:14.488792 kernel: ACPI: SSDT 0xFFFF987901198800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 13:54:14.488798 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:54:14.488803 kernel: ACPI: SSDT 0xFFFF987901187C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 13:54:14.488808 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:54:14.488814 kernel: ACPI: SSDT 0xFFFF987901199000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 13:54:14.488820 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:54:14.488825 kernel: ACPI: SSDT 0xFFFF9879011AD000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 13:54:14.488831 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:54:14.488836 kernel: ACPI: SSDT 0xFFFF98790226B800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 13:54:14.488841 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 13:54:14.488847 kernel: ACPI: Interpreter enabled Jan 30 13:54:14.488852 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:54:14.488857 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 30 13:54:14.488863 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 13:54:14.488868 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 13:54:14.488874 kernel: HEST: Table parsing has been initialized. Jan 30 13:54:14.488880 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 30 13:54:14.488885 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:54:14.488890 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:54:14.488896 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 13:54:14.488901 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 13:54:14.488907 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 13:54:14.488912 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 13:54:14.488917 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 13:54:14.488924 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 13:54:14.488929 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 30 13:54:14.488934 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 13:54:14.488940 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 13:54:14.488945 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 13:54:14.488951 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 13:54:14.488956 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 13:54:14.488961 kernel: ACPI: \PIN_: New power resource Jan 30 13:54:14.488967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 13:54:14.489041 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:54:14.489094 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 13:54:14.489141 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 13:54:14.489149 kernel: PCI host bridge to 
bus 0000:00 Jan 30 13:54:14.489200 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:54:14.489243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:54:14.489289 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:54:14.489331 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window] Jan 30 13:54:14.489373 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 13:54:14.489414 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 13:54:14.489474 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 13:54:14.489529 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 13:54:14.489582 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.489636 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Jan 30 13:54:14.489686 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.489737 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Jan 30 13:54:14.489785 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit] Jan 30 13:54:14.489833 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Jan 30 13:54:14.489881 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Jan 30 13:54:14.489937 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 13:54:14.489986 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit] Jan 30 13:54:14.490037 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 13:54:14.490085 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit] Jan 30 13:54:14.490138 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 13:54:14.490188 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit] Jan 30 13:54:14.490244 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 30 13:54:14.490295 kernel: pci 
0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 13:54:14.490346 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit] Jan 30 13:54:14.490393 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit] Jan 30 13:54:14.490448 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 13:54:14.490496 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:54:14.490551 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 13:54:14.490599 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:54:14.490654 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 13:54:14.490701 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit] Jan 30 13:54:14.490748 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 13:54:14.490799 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 13:54:14.490847 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit] Jan 30 13:54:14.490898 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 13:54:14.490951 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 13:54:14.490999 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit] Jan 30 13:54:14.491046 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 13:54:14.491100 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 13:54:14.491148 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff] Jan 30 13:54:14.491195 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff] Jan 30 13:54:14.491242 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Jan 30 13:54:14.491290 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Jan 30 13:54:14.491336 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Jan 30 13:54:14.491384 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff] Jan 30 13:54:14.491437 kernel: pci 0000:00:17.0: PME# supported from D3hot 
Jan 30 13:54:14.491491 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 13:54:14.491539 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491593 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 13:54:14.491641 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491696 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 13:54:14.491748 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491800 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 13:54:14.491850 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.491902 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Jan 30 13:54:14.491951 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.492005 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 13:54:14.492056 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:54:14.492108 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 13:54:14.492161 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 13:54:14.492208 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit] Jan 30 13:54:14.492257 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 13:54:14.492308 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 13:54:14.492359 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 13:54:14.492407 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:54:14.492466 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 13:54:14.492517 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 13:54:14.492566 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref] Jan 30 13:54:14.492615 kernel: pci 0000:02:00.0: PME# supported from D3cold Jan 30 13:54:14.492664 kernel: pci 
0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:54:14.492714 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:54:14.492769 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 13:54:14.492820 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 13:54:14.492868 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Jan 30 13:54:14.492917 kernel: pci 0000:02:00.1: PME# supported from D3cold Jan 30 13:54:14.492967 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:54:14.493015 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:54:14.493068 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 30 13:54:14.493116 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 30 13:54:14.493163 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:54:14.493211 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 30 13:54:14.493264 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 13:54:14.493313 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:54:14.493363 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Jan 30 13:54:14.493411 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 13:54:14.493499 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Jan 30 13:54:14.493548 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.493597 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 30 13:54:14.493645 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:54:14.493693 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 30 13:54:14.493751 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jan 30 13:54:14.493800 kernel: pci 
0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:54:14.493852 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Jan 30 13:54:14.493900 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 13:54:14.493949 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Jan 30 13:54:14.493998 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:54:14.494046 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 30 13:54:14.494095 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:54:14.494142 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 30 13:54:14.494190 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 30 13:54:14.494245 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 13:54:14.494296 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 30 13:54:14.494345 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 13:54:14.494394 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:54:14.494446 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 30 13:54:14.494493 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.494542 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.494608 kernel: pci_bus 0000:08: extended config space not accessible Jan 30 13:54:14.494667 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 13:54:14.494719 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Jan 30 13:54:14.494770 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Jan 30 13:54:14.494823 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 13:54:14.494873 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:54:14.494925 kernel: pci 0000:08:00.0: supports D1 D2 Jan 30 13:54:14.494979 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:54:14.495030 kernel: pci 0000:07:00.0: 
PCI bridge to [bus 08] Jan 30 13:54:14.495078 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.495130 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.495138 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 13:54:14.495144 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 13:54:14.495150 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 13:54:14.495156 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 13:54:14.495163 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 13:54:14.495169 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 13:54:14.495174 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 13:54:14.495180 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 13:54:14.495186 kernel: iommu: Default domain type: Translated Jan 30 13:54:14.495192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:54:14.495197 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:54:14.495203 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:54:14.495209 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 13:54:14.495215 kernel: e820: reserve RAM buffer [mem 0x6dfb2000-0x6fffffff] Jan 30 13:54:14.495221 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Jan 30 13:54:14.495226 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Jan 30 13:54:14.495232 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Jan 30 13:54:14.495237 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Jan 30 13:54:14.495288 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 30 13:54:14.495339 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 30 13:54:14.495390 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:54:14.495399 kernel: vgaarb: loaded Jan 30 
13:54:14.495407 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 30 13:54:14.495413 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 30 13:54:14.495418 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:54:14.495458 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:54:14.495464 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:54:14.495470 kernel: pnp: PnP ACPI init Jan 30 13:54:14.495523 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 13:54:14.495571 kernel: pnp 00:02: [dma 0 disabled] Jan 30 13:54:14.495622 kernel: pnp 00:03: [dma 0 disabled] Jan 30 13:54:14.495671 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 13:54:14.495715 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 13:54:14.495764 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 13:54:14.495810 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 13:54:14.495855 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 13:54:14.495900 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 13:54:14.495944 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 13:54:14.495987 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 13:54:14.496033 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 13:54:14.496078 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 13:54:14.496122 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 13:54:14.496168 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 13:54:14.496215 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 13:54:14.496259 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 13:54:14.496301 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 
13:54:14.496345 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 13:54:14.496387 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 13:54:14.496434 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 13:54:14.496519 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 13:54:14.496529 kernel: pnp: PnP ACPI: found 10 devices Jan 30 13:54:14.496535 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:54:14.496541 kernel: NET: Registered PF_INET protocol family Jan 30 13:54:14.496547 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:54:14.496553 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 13:54:14.496559 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:54:14.496565 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:54:14.496570 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:54:14.496576 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 13:54:14.496583 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:54:14.496589 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:54:14.496594 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:54:14.496600 kernel: NET: Registered PF_XDP protocol family Jan 30 13:54:14.496649 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Jan 30 13:54:14.496698 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Jan 30 13:54:14.496747 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Jan 30 13:54:14.496796 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:54:14.496849 kernel: pci 0000:02:00.0: BAR 7: no space 
for [mem size 0x00800000 64bit pref] Jan 30 13:54:14.496898 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:54:14.496949 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:54:14.496999 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:54:14.497050 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 30 13:54:14.497099 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 30 13:54:14.497148 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:54:14.497197 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 30 13:54:14.497244 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 30 13:54:14.497292 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:54:14.497340 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 30 13:54:14.497388 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 30 13:54:14.497439 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:54:14.497490 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 30 13:54:14.497538 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 30 13:54:14.497588 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 30 13:54:14.497637 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.497685 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.497733 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 30 13:54:14.497780 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 30 13:54:14.497828 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 30 13:54:14.497871 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 13:54:14.497917 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:54:14.497959 kernel: pci_bus 0000:00: 
resource 5 [io 0x0d00-0xffff window] Jan 30 13:54:14.498002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:54:14.498044 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Jan 30 13:54:14.498085 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 13:54:14.498133 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Jan 30 13:54:14.498178 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:54:14.498228 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 30 13:54:14.498273 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Jan 30 13:54:14.498323 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 30 13:54:14.498368 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Jan 30 13:54:14.498415 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 13:54:14.498499 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Jan 30 13:54:14.498549 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 30 13:54:14.498594 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Jan 30 13:54:14.498602 kernel: PCI: CLS 64 bytes, default 64 Jan 30 13:54:14.498608 kernel: DMAR: No ATSR found Jan 30 13:54:14.498614 kernel: DMAR: No SATC found Jan 30 13:54:14.498620 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 30 13:54:14.498625 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 30 13:54:14.498631 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 30 13:54:14.498637 kernel: DMAR: IOMMU feature pasid inconsistent Jan 30 13:54:14.498644 kernel: DMAR: IOMMU feature eafs inconsistent Jan 30 13:54:14.498650 kernel: DMAR: IOMMU feature prs inconsistent Jan 30 13:54:14.498656 kernel: DMAR: IOMMU feature nest inconsistent Jan 30 13:54:14.498661 kernel: DMAR: IOMMU feature mts inconsistent Jan 30 13:54:14.498667 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 30 13:54:14.498673 kernel: 
DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 30 13:54:14.498678 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 13:54:14.498684 kernel: DMAR: dmar1: Using Queued invalidation Jan 30 13:54:14.498732 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 30 13:54:14.498784 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 30 13:54:14.498834 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 30 13:54:14.498882 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 30 13:54:14.498930 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 30 13:54:14.498977 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 30 13:54:14.499025 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 30 13:54:14.499072 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 30 13:54:14.499119 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 30 13:54:14.499167 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 30 13:54:14.499215 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 30 13:54:14.499262 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 30 13:54:14.499310 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 30 13:54:14.499358 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 30 13:54:14.499405 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 30 13:54:14.499456 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 30 13:54:14.499504 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 30 13:54:14.499555 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 30 13:54:14.499602 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 30 13:54:14.499650 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 30 13:54:14.499697 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 30 13:54:14.499745 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 30 13:54:14.499792 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 30 13:54:14.499841 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 30 13:54:14.499890 kernel: pci 0000:02:00.1: 
Adding to iommu group 2 Jan 30 13:54:14.499943 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 13:54:14.499993 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 30 13:54:14.500041 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 30 13:54:14.500092 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 30 13:54:14.500101 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 13:54:14.500107 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:54:14.500113 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Jan 30 13:54:14.500119 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 30 13:54:14.500124 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 13:54:14.500132 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 13:54:14.500137 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 13:54:14.500143 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 30 13:54:14.500193 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 13:54:14.500202 kernel: Initialise system trusted keyrings Jan 30 13:54:14.500207 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 13:54:14.500213 kernel: Key type asymmetric registered Jan 30 13:54:14.500219 kernel: Asymmetric key parser 'x509' registered Jan 30 13:54:14.500226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:54:14.500232 kernel: io scheduler mq-deadline registered Jan 30 13:54:14.500238 kernel: io scheduler kyber registered Jan 30 13:54:14.500243 kernel: io scheduler bfq registered Jan 30 13:54:14.500291 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 30 13:54:14.500340 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 30 13:54:14.500389 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 30 13:54:14.500453 kernel: pcieport 
0000:00:1b.4: PME: Signaling with IRQ 125 Jan 30 13:54:14.500505 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 30 13:54:14.500553 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 30 13:54:14.500600 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 30 13:54:14.500652 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 13:54:14.500662 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 13:54:14.500668 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 13:54:14.500674 kernel: pstore: Using crash dump compression: deflate Jan 30 13:54:14.500680 kernel: pstore: Registered erst as persistent store backend Jan 30 13:54:14.500687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:54:14.500693 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:54:14.500698 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:54:14.500704 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:54:14.500751 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 13:54:14.500760 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:54:14.500803 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 13:54:14.500848 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 13:54:14.500895 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T13:54:13 UTC (1738245253) Jan 30 13:54:14.500938 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:54:14.500947 kernel: intel_pstate: Intel P-state driver initializing Jan 30 13:54:14.500952 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 13:54:14.500958 kernel: intel_pstate: HWP enabled Jan 30 13:54:14.500964 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:54:14.500969 kernel: Segment Routing with IPv6 Jan 30 13:54:14.500975 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:54:14.500981 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:54:14.500988 kernel: Key type dns_resolver registered Jan 30 13:54:14.500994 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 13:54:14.500999 kernel: IPI shorthand broadcast: enabled Jan 30 13:54:14.501005 kernel: sched_clock: Marking stable (2761052121, 1456325044)->(4688810292, -471433127) Jan 30 13:54:14.501011 kernel: registered taskstats version 1 Jan 30 13:54:14.501016 kernel: Loading compiled-in X.509 certificates Jan 30 13:54:14.501022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:54:14.501027 kernel: Key type .fscrypt registered Jan 30 13:54:14.501033 kernel: Key type fscrypt-provisioning registered Jan 30 13:54:14.501040 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:54:14.501045 kernel: ima: No architecture policies found Jan 30 13:54:14.501051 kernel: clk: Disabling unused clocks Jan 30 13:54:14.501057 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:54:14.501063 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:54:14.501068 kernel: Freeing unused kernel image (rodata/data gap) memory: 
1776K Jan 30 13:54:14.501074 kernel: Run /init as init process Jan 30 13:54:14.501080 kernel: with arguments: Jan 30 13:54:14.501086 kernel: /init Jan 30 13:54:14.501092 kernel: with environment: Jan 30 13:54:14.501097 kernel: HOME=/ Jan 30 13:54:14.501103 kernel: TERM=linux Jan 30 13:54:14.501108 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:54:14.501115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:54:14.501122 systemd[1]: Detected architecture x86-64. Jan 30 13:54:14.501129 systemd[1]: Running in initrd. Jan 30 13:54:14.501136 systemd[1]: No hostname configured, using default hostname. Jan 30 13:54:14.501141 systemd[1]: Hostname set to . Jan 30 13:54:14.501147 systemd[1]: Initializing machine ID from random generator. Jan 30 13:54:14.501153 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:54:14.501159 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:14.501165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:14.501172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:54:14.501178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:54:14.501185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:54:14.501191 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 30 13:54:14.501197 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:54:14.501204 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:54:14.501210 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:14.501216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:14.501223 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:54:14.501229 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:54:14.501235 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:54:14.501240 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:54:14.501247 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:54:14.501252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:54:14.501258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:54:14.501264 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:54:14.501270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:14.501277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:14.501283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:14.501289 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:54:14.501295 kernel: tsc: Refined TSC clocksource calibration: 3407.986 MHz Jan 30 13:54:14.501301 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc6d7268, max_idle_ns: 440795260133 ns Jan 30 13:54:14.501307 kernel: clocksource: Switched to clocksource tsc Jan 30 13:54:14.501312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 30 13:54:14.501318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:54:14.501324 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:54:14.501331 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:54:14.501337 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:54:14.501354 systemd-journald[269]: Collecting audit messages is disabled. Jan 30 13:54:14.501368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:54:14.501376 systemd-journald[269]: Journal started Jan 30 13:54:14.501389 systemd-journald[269]: Runtime Journal (/run/log/journal/2f8984513cab4a7186a223b458419b42) is 8.0M, max 639.1M, 631.1M free. Jan 30 13:54:14.504011 systemd-modules-load[271]: Inserted module 'overlay' Jan 30 13:54:14.520522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:14.543428 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:54:14.543450 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:54:14.550877 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:54:14.550968 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:14.551052 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:54:14.552001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:54:14.556572 systemd-modules-load[271]: Inserted module 'br_netfilter' Jan 30 13:54:14.556990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:54:14.576527 kernel: Bridge firewalling registered Jan 30 13:54:14.576597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 13:54:14.658378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:14.686800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:54:14.709860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:14.750683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:14.762064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:54:14.762569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:54:14.768397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:14.768542 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:14.769925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:54:14.779735 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:14.799084 systemd-resolved[304]: Positive Trust Anchors: Jan 30 13:54:14.799095 systemd-resolved[304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:54:14.799140 systemd-resolved[304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:54:14.799654 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:54:14.801918 systemd-resolved[304]: Defaulting to hostname 'linux'. Jan 30 13:54:14.811657 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:54:14.926642 dracut-cmdline[313]: dracut-dracut-053 Jan 30 13:54:14.926642 dracut-cmdline[313]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:54:14.818713 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:15.002490 kernel: SCSI subsystem initialized Jan 30 13:54:15.014473 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:54:15.026428 kernel: iscsi: registered transport (tcp) Jan 30 13:54:15.047184 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:54:15.047201 kernel: QLogic iSCSI HBA Driver Jan 30 13:54:15.070202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:54:15.096714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:54:15.137252 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:54:15.137270 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:54:15.146067 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:54:15.181485 kernel: raid6: avx2x4 gen() 46832 MB/s Jan 30 13:54:15.202462 kernel: raid6: avx2x2 gen() 53704 MB/s Jan 30 13:54:15.228501 kernel: raid6: avx2x1 gen() 45024 MB/s Jan 30 13:54:15.228518 kernel: raid6: using algorithm avx2x2 gen() 53704 MB/s Jan 30 13:54:15.255588 kernel: raid6: .... 
xor() 32731 MB/s, rmw enabled Jan 30 13:54:15.255605 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:54:15.276462 kernel: xor: automatically using best checksumming function avx Jan 30 13:54:15.373470 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:54:15.378940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:15.399767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:15.435889 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jan 30 13:54:15.438332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:15.475665 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:54:15.495656 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Jan 30 13:54:15.540581 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:15.566873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:15.651610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:15.675890 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:54:15.675911 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:54:15.676427 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:54:15.685724 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:54:15.733537 kernel: libata version 3.00 loaded. Jan 30 13:54:15.733553 kernel: PTP clock support registered Jan 30 13:54:15.733560 kernel: ACPI: bus type USB registered Jan 30 13:54:15.733574 kernel: usbcore: registered new interface driver usbfs Jan 30 13:54:15.733590 kernel: usbcore: registered new interface driver hub Jan 30 13:54:15.733598 kernel: usbcore: registered new device driver usb Jan 30 13:54:15.733607 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 30 13:54:15.733615 kernel: AES CTR mode by8 optimization enabled Jan 30 13:54:15.733622 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 13:54:15.872169 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:54:15.872348 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 30 13:54:15.872629 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 13:54:15.872848 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 13:54:15.873054 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 13:54:15.873350 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:54:15.873717 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 13:54:15.873930 kernel: scsi host0: ahci Jan 30 13:54:15.874120 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 13:54:15.874319 kernel: scsi host1: ahci Jan 30 13:54:15.874584 kernel: hub 1-0:1.0: USB hub found Jan 30 13:54:15.874829 kernel: scsi host2: ahci Jan 30 13:54:15.875024 kernel: hub 1-0:1.0: 16 ports detected Jan 30 13:54:15.875222 kernel: scsi host3: ahci Jan 30 13:54:15.875419 kernel: hub 2-0:1.0: USB hub found Jan 30 13:54:15.875717 kernel: scsi host4: ahci Jan 30 13:54:15.875908 kernel: hub 2-0:1.0: 10 ports detected Jan 30 13:54:15.876161 kernel: scsi host5: ahci Jan 30 13:54:15.876364 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 13:54:15.876383 kernel: scsi host6: ahci Jan 30 13:54:15.876641 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 30 13:54:15.876669 kernel: scsi host7: ahci Jan 30 13:54:15.876845 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Jan 30 13:54:15.876879 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Jan 30 13:54:15.876896 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Jan 30 13:54:15.876919 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Jan 30 13:54:15.876936 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Jan 30 13:54:15.876950 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Jan 30 13:54:15.876964 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Jan 30 13:54:15.876977 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Jan 30 13:54:15.735744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:15.984842 kernel: pps pps0: new PPS source ptp0 Jan 30 13:54:15.984929 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 30 13:54:15.985005 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:54:15.985075 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1b:6e Jan 30 13:54:15.985146 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 30 13:54:15.985218 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 13:54:15.985289 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Jan 30 13:54:16.452113 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:54:16.452197 kernel: pps pps1: new PPS source ptp1 Jan 30 13:54:16.452267 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 30 13:54:16.452337 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:54:16.452403 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1b:6f Jan 30 13:54:16.452476 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 30 13:54:16.452541 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 13:54:16.452608 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 13:54:16.618252 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618264 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618272 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:54:16.618279 kernel: hub 1-14:1.0: USB hub found Jan 30 13:54:16.618367 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:54:16.618376 kernel: hub 1-14:1.0: 4 ports detected Jan 30 13:54:16.618457 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618466 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618473 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618480 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 13:54:16.618553 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 13:54:16.618561 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 30 13:54:16.618626 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 13:54:16.618634 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 
13:54:16.618641 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:54:16.618651 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:54:16.618659 kernel: ata1.00: Features: NCQ-prio Jan 30 13:54:16.618666 kernel: ata2.00: Features: NCQ-prio Jan 30 13:54:16.618673 kernel: ata1.00: configured for UDMA/133 Jan 30 13:54:16.618681 kernel: ata2.00: configured for UDMA/133 Jan 30 13:54:16.618688 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 13:54:16.618756 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 13:54:16.618820 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 30 13:54:16.618890 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:16.618899 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:54:16.618961 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:54:16.618969 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:54:16.619029 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:54:16.619088 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:54:16.619147 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 30 13:54:16.619215 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 13:54:16.619275 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 13:54:16.619334 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 13:54:16.619391 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:54:16.619454 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 13:54:16.619513 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 13:54:16.619571 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:54:16.619630 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:16.619641 kernel: sd 1:0:0:0: [sdb] Preferred 
minimum I/O size 4096 bytes Jan 30 13:54:16.619699 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:54:16.619763 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:54:16.619771 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Jan 30 13:54:17.000294 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 13:54:17.000763 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:54:17.001184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:54:17.001250 kernel: GPT:9289727 != 937703087 Jan 30 13:54:17.001290 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:54:17.001326 kernel: GPT:9289727 != 937703087 Jan 30 13:54:17.001361 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:54:17.001397 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 13:54:17.001984 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:17.002030 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:54:17.002387 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (543) Jan 30 13:54:17.002462 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (552) Jan 30 13:54:17.002503 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:54:17.002540 kernel: usbcore: registered new interface driver usbhid Jan 30 13:54:17.002576 kernel: usbhid: USB HID core driver Jan 30 13:54:17.002611 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 13:54:17.002649 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:17.002685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:17.002720 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 
30 13:54:17.003098 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 13:54:17.003153 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 13:54:17.003669 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 13:54:17.004215 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 30 13:54:17.004639 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:54:17.004986 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jan 30 13:54:15.735850 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:15.985216 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:17.041553 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 30 13:54:15.999951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:16.000051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:16.042533 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:16.067601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:16.077714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:17.091591 disk-uuid[710]: Primary Header is updated. Jan 30 13:54:17.091591 disk-uuid[710]: Secondary Entries is updated. Jan 30 13:54:17.091591 disk-uuid[710]: Secondary Header is updated. Jan 30 13:54:16.078173 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:16.078217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 13:54:16.078244 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:16.078685 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:54:16.129600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:16.140651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:16.159599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:16.168657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:16.552038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 30 13:54:16.581284 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 30 13:54:16.598822 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 13:54:16.609523 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 13:54:16.624289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 13:54:16.667584 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:54:17.692650 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:54:17.700407 disk-uuid[711]: The operation has completed successfully. Jan 30 13:54:17.708532 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:17.737563 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:54:17.737614 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:54:17.786637 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 30 13:54:17.814566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:54:17.814581 sh[741]: Success Jan 30 13:54:17.848539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:54:17.869461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:54:17.879670 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:54:17.920916 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:54:17.921056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:17.931663 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:54:17.938661 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:54:17.944568 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:54:17.959455 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:54:17.961517 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:54:17.969850 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:54:17.982691 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:54:18.005005 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 13:54:18.075520 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:18.075534 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:18.075542 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:18.075549 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:18.075556 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:54:18.075563 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:18.065890 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:54:18.078227 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:54:18.093722 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:18.127718 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:54:18.138325 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:18.178367 systemd-networkd[925]: lo: Link UP Jan 30 13:54:18.178370 systemd-networkd[925]: lo: Gained carrier Jan 30 13:54:18.180942 systemd-networkd[925]: Enumeration completed Jan 30 13:54:18.195378 ignition[923]: Ignition 2.20.0 Jan 30 13:54:18.181024 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:18.195382 ignition[923]: Stage: fetch-offline Jan 30 13:54:18.181746 systemd-networkd[925]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:18.195403 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:18.185623 systemd[1]: Reached target network.target - Network. 
Jan 30 13:54:18.195408 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:18.197591 unknown[923]: fetched base config from "system" Jan 30 13:54:18.195463 ignition[923]: parsed url from cmdline: "" Jan 30 13:54:18.197595 unknown[923]: fetched user config from "system" Jan 30 13:54:18.195465 ignition[923]: no config URL provided Jan 30 13:54:18.209725 systemd-networkd[925]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:18.195467 ignition[923]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:54:18.214664 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:54:18.195489 ignition[923]: parsing config with SHA512: 4288d7dba6db387275eb3dd08f03c9fe77f76efef20b2b5d8acc66526758e403b9c5231a1368c8bd5165a24dac17c42c370a3963fd78cb0bca937cbc9e50baf7 Jan 30 13:54:18.238061 systemd-networkd[925]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:18.197793 ignition[923]: fetch-offline: fetch-offline passed Jan 30 13:54:18.239936 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:54:18.197795 ignition[923]: POST message to Packet Timeline Jan 30 13:54:18.252623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:54:18.197798 ignition[923]: POST Status error: resource requires networking Jan 30 13:54:18.197837 ignition[923]: Ignition finished successfully Jan 30 13:54:18.261725 ignition[937]: Ignition 2.20.0 Jan 30 13:54:18.261732 ignition[937]: Stage: kargs Jan 30 13:54:18.261885 ignition[937]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:18.446529 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 30 13:54:18.439390 systemd-networkd[925]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:54:18.261894 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:18.262657 ignition[937]: kargs: kargs passed Jan 30 13:54:18.262661 ignition[937]: POST message to Packet Timeline Jan 30 13:54:18.262678 ignition[937]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:18.263234 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47313->[::1]:53: read: connection refused Jan 30 13:54:18.463947 ignition[937]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 13:54:18.464660 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:32878->[::1]:53: read: connection refused Jan 30 13:54:18.658502 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 30 13:54:18.659551 systemd-networkd[925]: eno1: Link UP Jan 30 13:54:18.659794 systemd-networkd[925]: eno2: Link UP Jan 30 13:54:18.659910 systemd-networkd[925]: enp2s0f0np0: Link UP Jan 30 13:54:18.660045 systemd-networkd[925]: enp2s0f0np0: Gained carrier Jan 30 13:54:18.669614 systemd-networkd[925]: enp2s0f1np1: Link UP Jan 30 13:54:18.702628 systemd-networkd[925]: enp2s0f0np0: DHCPv4 address 147.75.90.195/31, gateway 147.75.90.194 acquired from 145.40.83.140 Jan 30 13:54:18.865600 ignition[937]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 13:54:18.866748 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45984->[::1]:53: read: connection refused Jan 30 13:54:19.467092 systemd-networkd[925]: enp2s0f1np1: Gained carrier Jan 30 13:54:19.667933 ignition[937]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 13:54:19.669156 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48396->[::1]:53: read: 
connection refused Jan 30 13:54:20.362916 systemd-networkd[925]: enp2s0f0np0: Gained IPv6LL Jan 30 13:54:20.618957 systemd-networkd[925]: enp2s0f1np1: Gained IPv6LL Jan 30 13:54:21.270706 ignition[937]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 13:54:21.271905 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39376->[::1]:53: read: connection refused Jan 30 13:54:24.475407 ignition[937]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 13:54:25.224796 ignition[937]: GET result: OK Jan 30 13:54:25.611021 ignition[937]: Ignition finished successfully Jan 30 13:54:25.612972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:54:25.639712 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:54:25.645963 ignition[957]: Ignition 2.20.0 Jan 30 13:54:25.645967 ignition[957]: Stage: disks Jan 30 13:54:25.646071 ignition[957]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:25.646077 ignition[957]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:25.646572 ignition[957]: disks: disks passed Jan 30 13:54:25.646574 ignition[957]: POST message to Packet Timeline Jan 30 13:54:25.646586 ignition[957]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:26.232064 ignition[957]: GET result: OK Jan 30 13:54:26.632527 ignition[957]: Ignition finished successfully Jan 30 13:54:26.634264 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:54:26.652642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:26.671688 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:54:26.693747 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:26.712736 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 30 13:54:26.729746 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:54:26.763696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:54:26.799911 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:54:26.809867 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:54:26.838597 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:54:26.909480 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:54:26.909909 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:54:26.917924 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:54:26.951574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:54:26.998469 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (983) Jan 30 13:54:26.998483 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:26.998492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:26.998503 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:26.960342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:54:27.028513 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:27.028531 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:54:26.999096 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:54:27.029022 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 13:54:27.051580 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 30 13:54:27.099659 coreos-metadata[1000]: Jan 30 13:54:27.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:54:27.051598 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:27.082496 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:54:27.146493 coreos-metadata[1001]: Jan 30 13:54:27.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:54:27.107671 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:54:27.139709 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:54:27.186555 initrd-setup-root[1015]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:54:27.197540 initrd-setup-root[1022]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:54:27.207543 initrd-setup-root[1029]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:54:27.218522 initrd-setup-root[1036]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:54:27.225176 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:27.255696 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:54:27.282660 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:27.272226 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:54:27.291169 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 13:54:27.314834 ignition[1103]: INFO : Ignition 2.20.0 Jan 30 13:54:27.314834 ignition[1103]: INFO : Stage: mount Jan 30 13:54:27.321524 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:27.321524 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:27.321524 ignition[1103]: INFO : mount: mount passed Jan 30 13:54:27.321524 ignition[1103]: INFO : POST message to Packet Timeline Jan 30 13:54:27.321524 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:27.320371 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:54:27.392663 coreos-metadata[1001]: Jan 30 13:54:27.358 INFO Fetch successful Jan 30 13:54:27.427751 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 13:54:27.427812 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 13:54:27.746134 coreos-metadata[1000]: Jan 30 13:54:27.745 INFO Fetch successful Jan 30 13:54:27.772741 coreos-metadata[1000]: Jan 30 13:54:27.772 INFO wrote hostname ci-4186.1.0-a-fe6ab79c24 to /sysroot/etc/hostname Jan 30 13:54:27.774098 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:54:27.821876 ignition[1103]: INFO : GET result: OK Jan 30 13:54:28.135350 ignition[1103]: INFO : Ignition finished successfully Jan 30 13:54:28.138373 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:54:28.170641 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:54:28.182408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:54:28.227205 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1127) Jan 30 13:54:28.227224 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:54:28.235370 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:28.241255 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:28.256233 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:28.256249 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:54:28.258158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:54:28.284054 ignition[1144]: INFO : Ignition 2.20.0 Jan 30 13:54:28.284054 ignition[1144]: INFO : Stage: files Jan 30 13:54:28.299664 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:28.299664 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:28.299664 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:54:28.299664 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:54:28.299664 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:54:28.299664 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:54:28.299664 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 
13:54:28.288099 unknown[1144]: wrote ssh authorized keys file for user: core Jan 30 13:54:28.429646 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:54:28.460276 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:54:28.460276 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:54:28.492618 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:54:28.973344 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:54:29.187828 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:54:29.187828 ignition[1144]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jan 30 13:54:29.218631 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:29.218631 ignition[1144]: INFO : files: files passed Jan 30 13:54:29.218631 ignition[1144]: INFO : POST message to Packet Timeline Jan 30 13:54:29.218631 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:29.787912 ignition[1144]: INFO : GET result: OK Jan 30 13:54:30.176938 ignition[1144]: INFO : Ignition finished successfully Jan 30 13:54:30.179532 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:54:30.213711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:54:30.224036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:54:30.234864 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:54:30.234926 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:54:30.294232 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:30.294232 initrd-setup-root-after-ignition[1184]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:30.332766 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:30.298714 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:30.309755 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:54:30.356842 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:54:30.458250 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:54:30.458538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 30 13:54:30.479939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:54:30.499802 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:54:30.519913 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:54:30.533844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:54:30.606886 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:30.632813 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:54:30.652215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:30.666761 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:30.687778 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:54:30.705812 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:54:30.705968 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:30.734189 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:54:30.756124 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:54:30.774126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:54:30.793117 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:30.814103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:30.835121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:54:30.855113 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:30.877159 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:54:30.898145 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 30 13:54:30.918123 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:54:30.935996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:54:30.936397 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:30.962234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:30.982142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:31.002974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:54:31.003333 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:31.026017 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:54:31.026420 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:31.058132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:54:31.058631 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:54:31.078315 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:54:31.095983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:54:31.096390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:31.117124 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:54:31.135128 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:54:31.153091 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:54:31.153393 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:54:31.173139 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:54:31.173468 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:54:31.196235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 30 13:54:31.196676 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:31.216191 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:54:31.216607 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:54:31.353584 ignition[1208]: INFO : Ignition 2.20.0 Jan 30 13:54:31.353584 ignition[1208]: INFO : Stage: umount Jan 30 13:54:31.353584 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:31.353584 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:54:31.353584 ignition[1208]: INFO : umount: umount passed Jan 30 13:54:31.353584 ignition[1208]: INFO : POST message to Packet Timeline Jan 30 13:54:31.353584 ignition[1208]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:54:31.234202 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:54:31.234626 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:54:31.263706 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:54:31.269677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:54:31.269761 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:31.316664 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:54:31.318777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:54:31.318918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:31.345676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:54:31.345747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:31.382892 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:54:31.385344 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 30 13:54:31.385468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:54:31.494522 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:54:31.494643 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:54:31.909500 ignition[1208]: INFO : GET result: OK Jan 30 13:54:32.244656 ignition[1208]: INFO : Ignition finished successfully Jan 30 13:54:32.247521 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:54:32.247821 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:54:32.266849 systemd[1]: Stopped target network.target - Network. Jan 30 13:54:32.281684 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:54:32.281868 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:54:32.301825 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:54:32.301995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:54:32.320860 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:54:32.321018 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:54:32.339842 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:54:32.340005 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:54:32.358828 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:54:32.358997 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:32.378353 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:54:32.395593 systemd-networkd[925]: enp2s0f0np0: DHCPv6 lease lost Jan 30 13:54:32.397886 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:54:32.407634 systemd-networkd[925]: enp2s0f1np1: DHCPv6 lease lost Jan 30 13:54:32.416459 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 30 13:54:32.416730 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:54:32.435866 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:54:32.436199 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:54:32.456246 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:54:32.456373 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:32.488606 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:54:32.514604 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:54:32.514683 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:32.533772 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:54:32.533875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:32.553838 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:54:32.554003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:32.572818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:54:32.572983 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:32.593053 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:32.614804 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:54:32.615185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:32.648650 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:54:32.648797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:32.654926 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 30 13:54:32.655035 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:32.682703 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:54:32.682936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:32.714026 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:54:32.714184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:54:32.753628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:32.753899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:32.800777 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:54:32.805868 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:54:32.806028 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:32.836787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:33.059589 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). Jan 30 13:54:32.836928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:32.856752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:54:32.857067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:54:32.939807 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:54:32.939875 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:54:32.948982 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:54:32.991604 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:54:33.013790 systemd[1]: Switching root. 
Jan 30 13:54:33.132581 systemd-journald[269]: Journal stopped Jan 30 13:54:34.830358 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:54:34.830374 kernel: SELinux: policy capability open_perms=1 Jan 30 13:54:34.830383 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:54:34.830388 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:54:34.830394 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:54:34.830399 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:54:34.830406 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:54:34.830411 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:54:34.830418 kernel: audit: type=1403 audit(1738245273.311:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:54:34.830427 systemd[1]: Successfully loaded SELinux policy in 73.004ms. Jan 30 13:54:34.830435 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.635ms. Jan 30 13:54:34.830442 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:54:34.830452 systemd[1]: Detected architecture x86-64. Jan 30 13:54:34.830459 systemd[1]: Detected first boot. Jan 30 13:54:34.830467 systemd[1]: Hostname set to . Jan 30 13:54:34.830474 systemd[1]: Initializing machine ID from random generator. Jan 30 13:54:34.830480 zram_generator::config[1258]: No configuration found. Jan 30 13:54:34.830487 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:54:34.830494 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:54:34.830500 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 30 13:54:34.830508 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:54:34.830515 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:54:34.830521 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:54:34.830528 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:54:34.830535 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:54:34.830541 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:54:34.830548 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:54:34.830556 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:54:34.830563 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:54:34.830570 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:34.830576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:34.830583 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:54:34.830590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:54:34.830596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:54:34.830603 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:54:34.830611 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 30 13:54:34.830618 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:34.830624 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 30 13:54:34.830631 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:54:34.830637 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:54:34.830646 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:54:34.830653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:34.830660 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:34.830668 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:54:34.830675 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:54:34.830681 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:54:34.830688 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:54:34.830695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:34.830702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:34.830708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:34.830716 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:54:34.830723 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:54:34.830730 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:54:34.830738 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:54:34.830746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:34.830754 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:54:34.830761 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:54:34.830768 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 13:54:34.830775 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:54:34.830782 systemd[1]: Reached target machines.target - Containers. Jan 30 13:54:34.830789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:54:34.830796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:54:34.830803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:54:34.830811 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:54:34.830819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:54:34.830825 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:54:34.830832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:54:34.830839 kernel: ACPI: bus type drm_connector registered Jan 30 13:54:34.830845 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:54:34.830852 kernel: fuse: init (API version 7.39) Jan 30 13:54:34.830859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:54:34.830865 kernel: loop: module loaded Jan 30 13:54:34.830873 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:54:34.830880 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:54:34.830887 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:54:34.830894 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:54:34.830901 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 30 13:54:34.830908 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:54:34.830923 systemd-journald[1361]: Collecting audit messages is disabled. Jan 30 13:54:34.830939 systemd-journald[1361]: Journal started Jan 30 13:54:34.830954 systemd-journald[1361]: Runtime Journal (/run/log/journal/e8b25036ad844f8aab50e9f1bc86c993) is 8.0M, max 639.1M, 631.1M free. Jan 30 13:54:33.706475 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:54:33.723608 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:54:33.723898 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:54:34.843599 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:54:34.864470 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:54:34.875464 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:54:34.905514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:34.922485 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:54:34.922541 systemd[1]: Stopped verity-setup.service. Jan 30 13:54:34.952480 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:34.952502 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:54:34.969911 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:54:34.979571 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:54:34.989720 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:54:34.999695 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:54:35.009686 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 30 13:54:35.019703 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:54:35.029762 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:54:35.040757 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:35.051816 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:54:35.051950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:54:35.062914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:54:35.063084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:54:35.076303 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:54:35.076669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:54:35.088344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:54:35.088702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:54:35.100303 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:54:35.100692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:54:35.111271 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:54:35.111664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:54:35.122289 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:35.133357 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:54:35.145251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:54:35.157273 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:35.191946 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 30 13:54:35.217734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:54:35.231220 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:54:35.240699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:54:35.240799 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:35.243803 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:54:35.266828 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:54:35.281529 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:54:35.291989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:35.294342 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:54:35.304061 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:54:35.315549 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:54:35.316273 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:54:35.318945 systemd-journald[1361]: Time spent on flushing to /var/log/journal/e8b25036ad844f8aab50e9f1bc86c993 is 12.816ms for 1392 entries. Jan 30 13:54:35.318945 systemd-journald[1361]: System Journal (/var/log/journal/e8b25036ad844f8aab50e9f1bc86c993) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:54:35.342219 systemd-journald[1361]: Received client request to flush runtime journal. Jan 30 13:54:35.333549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 30 13:54:35.343951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:54:35.354203 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:54:35.367182 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:54:35.377428 kernel: loop0: detected capacity change from 0 to 218376 Jan 30 13:54:35.383245 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:54:35.399351 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:54:35.399427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:54:35.410601 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:54:35.421661 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:54:35.432655 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:54:35.443595 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:54:35.452456 kernel: loop1: detected capacity change from 0 to 138184 Jan 30 13:54:35.460636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:35.470652 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:54:35.483907 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:54:35.514637 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:54:35.520503 kernel: loop2: detected capacity change from 0 to 8 Jan 30 13:54:35.531234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:54:35.543035 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 30 13:54:35.543641 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:54:35.554850 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 30 13:54:35.554860 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 30 13:54:35.555056 udevadm[1397]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:54:35.557316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:35.561467 kernel: loop3: detected capacity change from 0 to 141000 Jan 30 13:54:35.628433 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 13:54:35.653440 kernel: loop5: detected capacity change from 0 to 138184 Jan 30 13:54:35.661677 ldconfig[1387]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:54:35.663692 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:54:35.674479 kernel: loop6: detected capacity change from 0 to 8 Jan 30 13:54:35.682483 kernel: loop7: detected capacity change from 0 to 141000 Jan 30 13:54:35.723866 (sd-merge)[1416]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 30 13:54:35.724119 (sd-merge)[1416]: Merged extensions into '/usr'. Jan 30 13:54:35.726425 systemd[1]: Reloading requested from client PID 1393 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:54:35.726433 systemd[1]: Reloading... Jan 30 13:54:35.749496 zram_generator::config[1442]: No configuration found. Jan 30 13:54:35.818005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:54:35.857195 systemd[1]: Reloading finished in 130 ms. 
Jan 30 13:54:35.884591 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:54:35.895773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:54:35.921298 systemd[1]: Starting ensure-sysext.service... Jan 30 13:54:35.931767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:54:35.950103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:35.960222 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:54:35.960381 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:54:35.960886 systemd-tmpfiles[1500]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:54:35.961076 systemd-tmpfiles[1500]: ACLs are not supported, ignoring. Jan 30 13:54:35.961126 systemd-tmpfiles[1500]: ACLs are not supported, ignoring. Jan 30 13:54:35.962203 systemd[1]: Reloading requested from client PID 1498 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:54:35.962209 systemd[1]: Reloading... Jan 30 13:54:35.963224 systemd-tmpfiles[1500]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:54:35.963228 systemd-tmpfiles[1500]: Skipping /boot Jan 30 13:54:35.968830 systemd-tmpfiles[1500]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:54:35.968834 systemd-tmpfiles[1500]: Skipping /boot Jan 30 13:54:35.975396 systemd-udevd[1501]: Using default interface naming scheme 'v255'. Jan 30 13:54:35.991449 zram_generator::config[1528]: No configuration found. 
Jan 30 13:54:36.022444 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 30 13:54:36.022511 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1543) Jan 30 13:54:36.032434 kernel: ACPI: button: Sleep Button [SLPB] Jan 30 13:54:36.037430 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:54:36.052433 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:54:36.052482 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:54:36.068591 kernel: IPMI message handler: version 39.2 Jan 30 13:54:36.068655 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 30 13:54:36.097045 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 30 13:54:36.097221 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 30 13:54:36.097345 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 30 13:54:36.097471 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 30 13:54:36.104533 kernel: ipmi device interface Jan 30 13:54:36.104984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 13:54:36.146475 kernel: iTCO_vendor_support: vendor-support=0 Jan 30 13:54:36.146537 kernel: ipmi_si: IPMI System Interface driver Jan 30 13:54:36.157290 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 30 13:54:36.171386 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 30 13:54:36.171400 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 30 13:54:36.171409 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 30 13:54:36.203598 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 30 13:54:36.203676 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 30 13:54:36.203749 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 30 13:54:36.203761 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 30 13:54:36.177870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 13:54:36.221628 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 30 13:54:36.221816 systemd[1]: Reloading finished in 259 ms. Jan 30 13:54:36.232430 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Jan 30 13:54:36.248492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:36.250501 kernel: intel_rapl_common: Found RAPL domain package Jan 30 13:54:36.250534 kernel: intel_rapl_common: Found RAPL domain core Jan 30 13:54:36.250545 kernel: intel_rapl_common: Found RAPL domain uncore Jan 30 13:54:36.250556 kernel: intel_rapl_common: Found RAPL domain dram Jan 30 13:54:36.281428 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 30 13:54:36.295665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:54:36.315373 systemd[1]: Finished ensure-sysext.service. Jan 30 13:54:36.323430 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Jan 30 13:54:36.342277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:36.353627 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:54:36.362291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:54:36.372581 augenrules[1703]: No rules Jan 30 13:54:36.373634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:54:36.386933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:54:36.401055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:54:36.401430 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 30 13:54:36.409430 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 30 13:54:36.416221 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:54:36.439587 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:54:36.449585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:36.450084 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:54:36.461038 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:54:36.472375 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:36.473384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:54:36.483412 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 30 13:54:36.509561 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:54:36.521001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:36.530489 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:36.531015 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:54:36.541663 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:54:36.541747 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:54:36.542074 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:54:36.542210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:54:36.542296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:54:36.542456 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:54:36.542520 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:54:36.542656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:54:36.542719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:54:36.542850 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:54:36.542911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:54:36.543041 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:54:36.543273 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:54:36.548378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 30 13:54:36.564581 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:54:36.564625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:54:36.564662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:54:36.565290 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:54:36.566305 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:54:36.566331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:54:36.572548 lvm[1732]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:54:36.574933 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:54:36.600635 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:54:36.630699 systemd-resolved[1716]: Positive Trust Anchors: Jan 30 13:54:36.630705 systemd-resolved[1716]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:54:36.630730 systemd-resolved[1716]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:54:36.633834 systemd-resolved[1716]: Using system hostname 'ci-4186.1.0-a-fe6ab79c24'. Jan 30 13:54:36.639887 systemd-networkd[1715]: lo: Link UP Jan 30 13:54:36.639892 systemd-networkd[1715]: lo: Gained carrier Jan 30 13:54:36.642492 systemd-networkd[1715]: bond0: netdev ready Jan 30 13:54:36.643536 systemd-networkd[1715]: Enumeration completed Jan 30 13:54:36.649080 systemd-networkd[1715]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a3:88.network. Jan 30 13:54:36.664684 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:54:36.675740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:54:36.685528 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:36.695637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:36.706637 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:54:36.719004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:36.728505 systemd[1]: Reached target network.target - Network. Jan 30 13:54:36.736498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 30 13:54:36.747520 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:54:36.757557 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:54:36.768513 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:54:36.779512 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:54:36.790532 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:54:36.790550 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:54:36.798461 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:54:36.808556 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:54:36.818508 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:54:36.829467 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:54:36.837945 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:54:36.848170 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:54:36.858354 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:54:36.868135 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:54:36.880216 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:54:36.882222 lvm[1757]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:54:36.891782 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:54:36.901687 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:54:36.911592 systemd[1]: Reached target basic.target - Basic System. 
Jan 30 13:54:36.919636 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:54:36.919661 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:54:36.930516 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:54:36.940473 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 30 13:54:36.954469 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jan 30 13:54:36.956056 systemd-networkd[1715]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a3:89.network. Jan 30 13:54:36.956953 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:54:36.967123 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:54:36.976120 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:54:36.979678 coreos-metadata[1760]: Jan 30 13:54:36.979 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:54:36.980592 coreos-metadata[1760]: Jan 30 13:54:36.980 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jan 30 13:54:36.986221 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:54:36.986839 dbus-daemon[1761]: [system] SELinux support is enabled Jan 30 13:54:36.987981 jq[1764]: false Jan 30 13:54:36.995553 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:54:36.996225 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 30 13:54:37.003376 extend-filesystems[1766]: Found loop4 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found loop5 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found loop6 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found loop7 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda1 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda2 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda3 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found usr Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda4 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda6 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda7 Jan 30 13:54:37.018725 extend-filesystems[1766]: Found sda9 Jan 30 13:54:37.018725 extend-filesystems[1766]: Checking size of /dev/sda9 Jan 30 13:54:37.018725 extend-filesystems[1766]: Resized partition /dev/sda9 Jan 30 13:54:37.176533 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jan 30 13:54:37.176548 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1552) Jan 30 13:54:37.176561 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 30 13:54:37.176679 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jan 30 13:54:37.176690 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 30 13:54:37.006215 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:54:37.176751 extend-filesystems[1775]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:54:37.053564 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:54:37.087576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:54:37.089878 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 30 13:54:37.118711 systemd-networkd[1715]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 30 13:54:37.119791 systemd-networkd[1715]: enp2s0f0np0: Link UP Jan 30 13:54:37.119978 systemd-networkd[1715]: enp2s0f0np0: Gained carrier Jan 30 13:54:37.136631 systemd-networkd[1715]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a3:88.network. Jan 30 13:54:37.136797 systemd-networkd[1715]: enp2s0f1np1: Link UP Jan 30 13:54:37.136975 systemd-networkd[1715]: enp2s0f1np1: Gained carrier Jan 30 13:54:37.139360 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 30 13:54:37.146805 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:54:37.147179 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:54:37.148618 systemd-networkd[1715]: bond0: Link UP Jan 30 13:54:37.148849 systemd-networkd[1715]: bond0: Gained carrier Jan 30 13:54:37.148958 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:37.149273 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:37.149487 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:37.149572 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:37.169935 systemd-logind[1786]: Watching system buttons on /dev/input/event3 (Power Button) Jan 30 13:54:37.169944 systemd-logind[1786]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:54:37.169954 systemd-logind[1786]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 30 13:54:37.170253 systemd-logind[1786]: New seat seat0. Jan 30 13:54:37.196537 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 30 13:54:37.201348 jq[1792]: true Jan 30 13:54:37.203368 update_engine[1791]: I20250130 13:54:37.203332 1791 main.cc:92] Flatcar Update Engine starting Jan 30 13:54:37.204109 update_engine[1791]: I20250130 13:54:37.204092 1791 update_check_scheduler.cc:74] Next update check in 8m53s Jan 30 13:54:37.210752 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:54:37.221970 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:54:37.241988 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 30 13:54:37.242013 kernel: bond0: active interface up! Jan 30 13:54:37.242270 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:54:37.271720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:54:37.271813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:54:37.271979 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:54:37.272060 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:54:37.281966 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:54:37.282049 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:54:37.302343 jq[1795]: true Jan 30 13:54:37.303244 (ntainerd)[1796]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:54:37.307282 dbus-daemon[1761]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:54:37.307868 tar[1794]: linux-amd64/LICENSE Jan 30 13:54:37.307987 tar[1794]: linux-amd64/helm Jan 30 13:54:37.311929 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 30 13:54:37.312031 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. 
Jan 30 13:54:37.316827 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:54:37.327075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:54:37.327182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:54:37.329236 sshd_keygen[1789]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:54:37.338598 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:54:37.338680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:54:37.357451 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 30 13:54:37.367426 bash[1824]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:54:37.367610 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:54:37.379272 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:54:37.382879 locksmithd[1833]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:54:37.389725 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:54:37.412621 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:54:37.422357 systemd[1]: Starting sshkeys.service... Jan 30 13:54:37.429805 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:54:37.429912 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:54:37.450601 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:54:37.460783 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 30 13:54:37.470917 containerd[1796]: time="2025-01-30T13:54:37.470865526Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:54:37.473865 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:54:37.483365 containerd[1796]: time="2025-01-30T13:54:37.483347061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484188 containerd[1796]: time="2025-01-30T13:54:37.484172862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484218 containerd[1796]: time="2025-01-30T13:54:37.484187671Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:54:37.484218 containerd[1796]: time="2025-01-30T13:54:37.484197774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:54:37.484291 containerd[1796]: time="2025-01-30T13:54:37.484283354Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:54:37.484316 containerd[1796]: time="2025-01-30T13:54:37.484293617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484339 containerd[1796]: time="2025-01-30T13:54:37.484327574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484339 containerd[1796]: time="2025-01-30T13:54:37.484335384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484442 containerd[1796]: time="2025-01-30T13:54:37.484428652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484442 containerd[1796]: time="2025-01-30T13:54:37.484438507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484500 containerd[1796]: time="2025-01-30T13:54:37.484446084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484500 containerd[1796]: time="2025-01-30T13:54:37.484451656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484500 containerd[1796]: time="2025-01-30T13:54:37.484492767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484617 containerd[1796]: time="2025-01-30T13:54:37.484607135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484683 containerd[1796]: time="2025-01-30T13:54:37.484669694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:37.484710 containerd[1796]: time="2025-01-30T13:54:37.484682315Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 13:54:37.484743 containerd[1796]: time="2025-01-30T13:54:37.484734868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:54:37.484771 containerd[1796]: time="2025-01-30T13:54:37.484762019Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:54:37.495657 containerd[1796]: time="2025-01-30T13:54:37.495644350Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:54:37.495697 containerd[1796]: time="2025-01-30T13:54:37.495670788Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:54:37.495697 containerd[1796]: time="2025-01-30T13:54:37.495680767Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:54:37.495697 containerd[1796]: time="2025-01-30T13:54:37.495689887Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:54:37.495670 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:54:37.495806 containerd[1796]: time="2025-01-30T13:54:37.495697451Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:54:37.495828 containerd[1796]: time="2025-01-30T13:54:37.495820031Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:54:37.495962 containerd[1796]: time="2025-01-30T13:54:37.495953860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:54:37.496018 containerd[1796]: time="2025-01-30T13:54:37.496010612Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 13:54:37.496034 containerd[1796]: time="2025-01-30T13:54:37.496020957Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:54:37.496034 containerd[1796]: time="2025-01-30T13:54:37.496029091Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:54:37.496061 containerd[1796]: time="2025-01-30T13:54:37.496036379Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496061 containerd[1796]: time="2025-01-30T13:54:37.496043550Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496061 containerd[1796]: time="2025-01-30T13:54:37.496050469Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496061 containerd[1796]: time="2025-01-30T13:54:37.496058015Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496065819Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496072420Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496078858Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496084511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496095651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496103244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496109859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496121 containerd[1796]: time="2025-01-30T13:54:37.496116783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496123524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496130581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496137186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496144253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496151239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496160753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496168861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496175531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496184529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496195950Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496207961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496214959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496237 containerd[1796]: time="2025-01-30T13:54:37.496221034Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:54:37.496590 containerd[1796]: time="2025-01-30T13:54:37.496579970Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:54:37.496613 containerd[1796]: time="2025-01-30T13:54:37.496594713Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:54:37.496613 containerd[1796]: time="2025-01-30T13:54:37.496601184Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:54:37.496613 containerd[1796]: time="2025-01-30T13:54:37.496608161Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:54:37.496657 containerd[1796]: time="2025-01-30T13:54:37.496613287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496657 containerd[1796]: time="2025-01-30T13:54:37.496620168Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:54:37.496657 containerd[1796]: time="2025-01-30T13:54:37.496626022Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:54:37.496657 containerd[1796]: time="2025-01-30T13:54:37.496631410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:54:37.496826 containerd[1796]: time="2025-01-30T13:54:37.496801593Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:54:37.496905 containerd[1796]: time="2025-01-30T13:54:37.496830178Z" level=info msg="Connect containerd service" Jan 30 13:54:37.496905 containerd[1796]: time="2025-01-30T13:54:37.496856403Z" level=info msg="using legacy CRI server" Jan 30 13:54:37.496905 containerd[1796]: time="2025-01-30T13:54:37.496861565Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:54:37.496953 containerd[1796]: time="2025-01-30T13:54:37.496936357Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:54:37.497271 containerd[1796]: 
time="2025-01-30T13:54:37.497261906Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:54:37.497399 containerd[1796]: time="2025-01-30T13:54:37.497380586Z" level=info msg="Start subscribing containerd event" Jan 30 13:54:37.497417 containerd[1796]: time="2025-01-30T13:54:37.497410872Z" level=info msg="Start recovering state" Jan 30 13:54:37.497437 containerd[1796]: time="2025-01-30T13:54:37.497420866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:54:37.497459 containerd[1796]: time="2025-01-30T13:54:37.497453620Z" level=info msg="Start event monitor" Jan 30 13:54:37.497474 containerd[1796]: time="2025-01-30T13:54:37.497455472Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:54:37.497474 containerd[1796]: time="2025-01-30T13:54:37.497465267Z" level=info msg="Start snapshots syncer" Jan 30 13:54:37.497474 containerd[1796]: time="2025-01-30T13:54:37.497470694Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:54:37.497516 containerd[1796]: time="2025-01-30T13:54:37.497474872Z" level=info msg="Start streaming server" Jan 30 13:54:37.497516 containerd[1796]: time="2025-01-30T13:54:37.497508503Z" level=info msg="containerd successfully booted in 0.027116s" Jan 30 13:54:37.507156 coreos-metadata[1864]: Jan 30 13:54:37.507 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:54:37.507371 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:54:37.516339 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 30 13:54:37.525629 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:54:37.533940 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 30 13:54:37.577430 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jan 30 13:54:37.601736 extend-filesystems[1775]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:54:37.601736 extend-filesystems[1775]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 30 13:54:37.601736 extend-filesystems[1775]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jan 30 13:54:37.630579 extend-filesystems[1766]: Resized filesystem in /dev/sda9 Jan 30 13:54:37.630579 extend-filesystems[1766]: Found sdb Jan 30 13:54:37.648581 tar[1794]: linux-amd64/README.md Jan 30 13:54:37.602639 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:54:37.602747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:54:37.665225 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:54:37.980732 coreos-metadata[1760]: Jan 30 13:54:37.980 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jan 30 13:54:38.410637 systemd-networkd[1715]: bond0: Gained IPv6LL Jan 30 13:54:38.410882 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:38.604660 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:38.605138 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:38.610332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:54:38.623508 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:54:38.655193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:54:38.668082 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:54:38.687276 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:54:39.356578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:54:39.368050 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:54:39.791816 kubelet[1895]: E0130 13:54:39.791720 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:54:39.792789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:54:39.792864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:54:40.056898 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Jan 30 13:54:40.057726 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Jan 30 13:54:41.121679 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:54:41.143726 systemd[1]: Started sshd@0-147.75.90.195:22-139.178.89.65:56900.service - OpenSSH per-connection server daemon (139.178.89.65:56900). Jan 30 13:54:41.193399 sshd[1915]: Accepted publickey for core from 139.178.89.65 port 56900 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:41.194147 sshd-session[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:41.199975 systemd-logind[1786]: New session 1 of user core. Jan 30 13:54:41.200778 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:54:41.216763 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:54:41.221928 coreos-metadata[1760]: Jan 30 13:54:41.221 INFO Fetch successful Jan 30 13:54:41.231088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 30 13:54:41.255671 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:54:41.266391 (systemd)[1919]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:54:41.273567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:54:41.284743 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 30 13:54:41.342075 systemd[1919]: Queued start job for default target default.target. Jan 30 13:54:41.351102 systemd[1919]: Created slice app.slice - User Application Slice. Jan 30 13:54:41.351116 systemd[1919]: Reached target paths.target - Paths. Jan 30 13:54:41.351126 systemd[1919]: Reached target timers.target - Timers. Jan 30 13:54:41.351792 systemd[1919]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:54:41.357796 systemd[1919]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:54:41.357824 systemd[1919]: Reached target sockets.target - Sockets. Jan 30 13:54:41.357833 systemd[1919]: Reached target basic.target - Basic System. Jan 30 13:54:41.357854 systemd[1919]: Reached target default.target - Main User Target. Jan 30 13:54:41.357870 systemd[1919]: Startup finished in 87ms. Jan 30 13:54:41.357938 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:54:41.374719 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:54:41.386241 coreos-metadata[1864]: Jan 30 13:54:41.386 INFO Fetch successful Jan 30 13:54:41.426675 unknown[1864]: wrote ssh authorized keys file for user: core Jan 30 13:54:41.450520 systemd[1]: Started sshd@1-147.75.90.195:22-139.178.89.65:58224.service - OpenSSH per-connection server daemon (139.178.89.65:58224). Jan 30 13:54:41.471294 update-ssh-keys[1936]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:54:41.471970 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jan 30 13:54:41.483390 systemd[1]: Finished sshkeys.service. Jan 30 13:54:41.501604 sshd[1938]: Accepted publickey for core from 139.178.89.65 port 58224 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:41.502346 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:41.505228 systemd-logind[1786]: New session 2 of user core. Jan 30 13:54:41.514611 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:54:41.579680 sshd[1943]: Connection closed by 139.178.89.65 port 58224 Jan 30 13:54:41.579829 sshd-session[1938]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:41.594109 systemd[1]: sshd@1-147.75.90.195:22-139.178.89.65:58224.service: Deactivated successfully. Jan 30 13:54:41.594950 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:54:41.595747 systemd-logind[1786]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:54:41.596456 systemd[1]: Started sshd@2-147.75.90.195:22-139.178.89.65:58236.service - OpenSSH per-connection server daemon (139.178.89.65:58236). Jan 30 13:54:41.608294 systemd-logind[1786]: Removed session 2. Jan 30 13:54:41.630083 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 30 13:54:41.642626 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:54:41.652916 systemd[1]: Startup finished in 2.973s (kernel) + 19.453s (initrd) + 8.412s (userspace) = 30.839s. Jan 30 13:54:41.656178 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 58236 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:41.658253 sshd-session[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:41.672129 systemd-logind[1786]: New session 3 of user core. Jan 30 13:54:41.674456 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 13:54:41.683257 agetty[1871]: failed to open credentials directory Jan 30 13:54:41.683333 agetty[1873]: failed to open credentials directory Jan 30 13:54:41.700127 login[1873]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:54:41.701523 login[1871]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:54:41.709042 systemd-logind[1786]: New session 4 of user core. Jan 30 13:54:41.721809 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:54:41.727291 systemd-logind[1786]: New session 5 of user core. Jan 30 13:54:41.729387 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:54:41.737969 sshd[1953]: Connection closed by 139.178.89.65 port 58236 Jan 30 13:54:41.738577 sshd-session[1948]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:41.741050 systemd[1]: sshd@2-147.75.90.195:22-139.178.89.65:58236.service: Deactivated successfully. Jan 30 13:54:41.741909 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:54:41.742543 systemd-logind[1786]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:54:41.743236 systemd-logind[1786]: Removed session 3. Jan 30 13:54:50.044860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:54:50.059687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:54:50.310919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:54:50.313280 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:54:50.336833 kubelet[1989]: E0130 13:54:50.336776 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:54:50.339163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:54:50.339248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:54:51.773736 systemd[1]: Started sshd@3-147.75.90.195:22-139.178.89.65:42748.service - OpenSSH per-connection server daemon (139.178.89.65:42748). Jan 30 13:54:51.805429 sshd[2009]: Accepted publickey for core from 139.178.89.65 port 42748 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:51.806203 sshd-session[2009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:51.809325 systemd-logind[1786]: New session 6 of user core. Jan 30 13:54:51.817668 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:54:51.871840 sshd[2011]: Connection closed by 139.178.89.65 port 42748 Jan 30 13:54:51.871959 sshd-session[2009]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:51.886111 systemd[1]: sshd@3-147.75.90.195:22-139.178.89.65:42748.service: Deactivated successfully. Jan 30 13:54:51.886932 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:54:51.887670 systemd-logind[1786]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:54:51.888449 systemd[1]: Started sshd@4-147.75.90.195:22-139.178.89.65:42760.service - OpenSSH per-connection server daemon (139.178.89.65:42760). 
Jan 30 13:54:51.889107 systemd-logind[1786]: Removed session 6. Jan 30 13:54:51.927775 sshd[2016]: Accepted publickey for core from 139.178.89.65 port 42760 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:51.928894 sshd-session[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:51.933241 systemd-logind[1786]: New session 7 of user core. Jan 30 13:54:51.950773 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:54:52.008918 sshd[2018]: Connection closed by 139.178.89.65 port 42760 Jan 30 13:54:52.009672 sshd-session[2016]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.026408 systemd[1]: sshd@4-147.75.90.195:22-139.178.89.65:42760.service: Deactivated successfully. Jan 30 13:54:52.030085 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:54:52.033512 systemd-logind[1786]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:54:52.054199 systemd[1]: Started sshd@5-147.75.90.195:22-139.178.89.65:42774.service - OpenSSH per-connection server daemon (139.178.89.65:42774). Jan 30 13:54:52.056576 systemd-logind[1786]: Removed session 7. Jan 30 13:54:52.114230 sshd[2023]: Accepted publickey for core from 139.178.89.65 port 42774 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:52.115100 sshd-session[2023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.118354 systemd-logind[1786]: New session 8 of user core. Jan 30 13:54:52.128686 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:54:52.191480 sshd[2025]: Connection closed by 139.178.89.65 port 42774 Jan 30 13:54:52.192252 sshd-session[2023]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.213375 systemd[1]: sshd@5-147.75.90.195:22-139.178.89.65:42774.service: Deactivated successfully. Jan 30 13:54:52.217075 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 30 13:54:52.220534 systemd-logind[1786]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:54:52.237264 systemd[1]: Started sshd@6-147.75.90.195:22-139.178.89.65:42786.service - OpenSSH per-connection server daemon (139.178.89.65:42786). Jan 30 13:54:52.239623 systemd-logind[1786]: Removed session 8. Jan 30 13:54:52.304815 sshd[2030]: Accepted publickey for core from 139.178.89.65 port 42786 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:52.305889 sshd-session[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.310081 systemd-logind[1786]: New session 9 of user core. Jan 30 13:54:52.320675 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:54:52.388046 sudo[2033]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:54:52.388190 sudo[2033]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:52.401151 sudo[2033]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:52.401952 sshd[2032]: Connection closed by 139.178.89.65 port 42786 Jan 30 13:54:52.402173 sshd-session[2030]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.420551 systemd[1]: sshd@6-147.75.90.195:22-139.178.89.65:42786.service: Deactivated successfully. Jan 30 13:54:52.421591 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:54:52.422617 systemd-logind[1786]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:54:52.423587 systemd[1]: Started sshd@7-147.75.90.195:22-139.178.89.65:42800.service - OpenSSH per-connection server daemon (139.178.89.65:42800). Jan 30 13:54:52.424344 systemd-logind[1786]: Removed session 9. 
Jan 30 13:54:52.459478 sshd[2038]: Accepted publickey for core from 139.178.89.65 port 42800 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:52.460380 sshd-session[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.463305 systemd-logind[1786]: New session 10 of user core. Jan 30 13:54:52.474663 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:54:52.531356 sudo[2042]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:54:52.531502 sudo[2042]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:52.533561 sudo[2042]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:52.536145 sudo[2041]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:54:52.536293 sudo[2041]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:52.550715 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:54:52.571331 augenrules[2064]: No rules Jan 30 13:54:52.571880 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:54:52.572050 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:54:52.572914 sudo[2041]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:52.574030 sshd[2040]: Connection closed by 139.178.89.65 port 42800 Jan 30 13:54:52.574356 sshd-session[2038]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.596916 systemd[1]: sshd@7-147.75.90.195:22-139.178.89.65:42800.service: Deactivated successfully. Jan 30 13:54:52.600305 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:54:52.603680 systemd-logind[1786]: Session 10 logged out. Waiting for processes to exit. 
Jan 30 13:54:52.632285 systemd[1]: Started sshd@8-147.75.90.195:22-139.178.89.65:42814.service - OpenSSH per-connection server daemon (139.178.89.65:42814). Jan 30 13:54:52.635086 systemd-logind[1786]: Removed session 10. Jan 30 13:54:52.693770 sshd[2073]: Accepted publickey for core from 139.178.89.65 port 42814 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:54:52.694872 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.698974 systemd-logind[1786]: New session 11 of user core. Jan 30 13:54:52.716879 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:54:52.782882 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:54:52.783830 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:53.178782 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:54:53.178851 (dockerd)[2106]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:54:53.464752 dockerd[2106]: time="2025-01-30T13:54:53.464661717Z" level=info msg="Starting up" Jan 30 13:54:53.543810 dockerd[2106]: time="2025-01-30T13:54:53.543761501Z" level=info msg="Loading containers: start." Jan 30 13:54:53.653474 kernel: Initializing XFRM netlink socket Jan 30 13:54:53.669020 systemd-timesyncd[1717]: Network configuration changed, trying to establish connection. Jan 30 13:54:53.738713 systemd-networkd[1715]: docker0: Link UP Jan 30 13:54:53.769557 dockerd[2106]: time="2025-01-30T13:54:53.769506638Z" level=info msg="Loading containers: done." Jan 30 13:54:53.781607 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1482113430-merged.mount: Deactivated successfully. 
Jan 30 13:54:53.781740 dockerd[2106]: time="2025-01-30T13:54:53.781648224Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:54:53.781740 dockerd[2106]: time="2025-01-30T13:54:53.781696617Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:54:53.781811 dockerd[2106]: time="2025-01-30T13:54:53.781753398Z" level=info msg="Daemon has completed initialization" Jan 30 13:54:53.796830 dockerd[2106]: time="2025-01-30T13:54:53.796770431Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:54:53.796904 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:54:54.400577 containerd[1796]: time="2025-01-30T13:54:54.400490493Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:54:54.896353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313087034.mount: Deactivated successfully. Jan 30 13:54:55.569773 systemd-timesyncd[1717]: Contacted time server [2603:c020:0:8369:0:ba11:ba11:ba11]:123 (2.flatcar.pool.ntp.org). Jan 30 13:54:55.569799 systemd-timesyncd[1717]: Initial clock synchronization to Thu 2025-01-30 13:54:55.655441 UTC. 
Jan 30 13:54:55.640076 containerd[1796]: time="2025-01-30T13:54:55.640024857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:55.640290 containerd[1796]: time="2025-01-30T13:54:55.640206374Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:54:55.640659 containerd[1796]: time="2025-01-30T13:54:55.640617767Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:55.642466 containerd[1796]: time="2025-01-30T13:54:55.642449402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:55.642966 containerd[1796]: time="2025-01-30T13:54:55.642953297Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.242389385s" Jan 30 13:54:55.643012 containerd[1796]: time="2025-01-30T13:54:55.642968590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:54:55.643271 containerd[1796]: time="2025-01-30T13:54:55.643245488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:54:56.683769 containerd[1796]: time="2025-01-30T13:54:56.683742829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:56.683979 containerd[1796]: time="2025-01-30T13:54:56.683940011Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:54:56.684355 containerd[1796]: time="2025-01-30T13:54:56.684345017Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:56.686822 containerd[1796]: time="2025-01-30T13:54:56.686777763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:56.687316 containerd[1796]: time="2025-01-30T13:54:56.687274752Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.044011427s" Jan 30 13:54:56.687316 containerd[1796]: time="2025-01-30T13:54:56.687291279Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:54:56.687595 containerd[1796]: time="2025-01-30T13:54:56.687541922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:54:57.528903 containerd[1796]: time="2025-01-30T13:54:57.528848738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:57.529106 containerd[1796]: time="2025-01-30T13:54:57.529055344Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:54:57.529484 containerd[1796]: time="2025-01-30T13:54:57.529436281Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:57.530986 containerd[1796]: time="2025-01-30T13:54:57.530946250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:57.531619 containerd[1796]: time="2025-01-30T13:54:57.531581121Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 844.021436ms" Jan 30 13:54:57.531619 containerd[1796]: time="2025-01-30T13:54:57.531595880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:54:57.532021 containerd[1796]: time="2025-01-30T13:54:57.532002927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:54:58.299807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3680331672.mount: Deactivated successfully. 
Jan 30 13:54:58.499376 containerd[1796]: time="2025-01-30T13:54:58.499352536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:58.499647 containerd[1796]: time="2025-01-30T13:54:58.499634152Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:54:58.499975 containerd[1796]: time="2025-01-30T13:54:58.499963704Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:58.500999 containerd[1796]: time="2025-01-30T13:54:58.500958745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:58.501728 containerd[1796]: time="2025-01-30T13:54:58.501683497Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 969.664753ms" Jan 30 13:54:58.501728 containerd[1796]: time="2025-01-30T13:54:58.501699090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:54:58.501969 containerd[1796]: time="2025-01-30T13:54:58.501958470Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:54:58.984798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162600712.mount: Deactivated successfully. 
Jan 30 13:54:59.519807 containerd[1796]: time="2025-01-30T13:54:59.519749672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:59.520221 containerd[1796]: time="2025-01-30T13:54:59.520173551Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:54:59.520554 containerd[1796]: time="2025-01-30T13:54:59.520507782Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:59.522085 containerd[1796]: time="2025-01-30T13:54:59.522044546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:59.522736 containerd[1796]: time="2025-01-30T13:54:59.522695434Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.020722183s" Jan 30 13:54:59.522736 containerd[1796]: time="2025-01-30T13:54:59.522710644Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:54:59.523001 containerd[1796]: time="2025-01-30T13:54:59.522952836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:54:59.961888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694585545.mount: Deactivated successfully. 
Jan 30 13:54:59.963132 containerd[1796]: time="2025-01-30T13:54:59.963084162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:59.963334 containerd[1796]: time="2025-01-30T13:54:59.963315182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:54:59.963713 containerd[1796]: time="2025-01-30T13:54:59.963701135Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:59.964756 containerd[1796]: time="2025-01-30T13:54:59.964742880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:59.965601 containerd[1796]: time="2025-01-30T13:54:59.965557996Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 442.590728ms" Jan 30 13:54:59.965601 containerd[1796]: time="2025-01-30T13:54:59.965574228Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:54:59.965878 containerd[1796]: time="2025-01-30T13:54:59.965868817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:55:00.448993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:55:00.464668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:55:00.508561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381028405.mount: Deactivated successfully. Jan 30 13:55:00.710095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:00.712802 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:00.738963 kubelet[2472]: E0130 13:55:00.738886 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:00.740091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:00.740174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:01.752779 containerd[1796]: time="2025-01-30T13:55:01.752728767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:01.752991 containerd[1796]: time="2025-01-30T13:55:01.752948014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:55:01.753378 containerd[1796]: time="2025-01-30T13:55:01.753337330Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:01.755201 containerd[1796]: time="2025-01-30T13:55:01.755163942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:01.755940 containerd[1796]: time="2025-01-30T13:55:01.755902067Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.790019032s" Jan 30 13:55:01.755940 containerd[1796]: time="2025-01-30T13:55:01.755919936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:55:03.706565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:03.721782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:03.735277 systemd[1]: Reloading requested from client PID 2582 ('systemctl') (unit session-11.scope)... Jan 30 13:55:03.735284 systemd[1]: Reloading... Jan 30 13:55:03.780437 zram_generator::config[2621]: No configuration found. Jan 30 13:55:03.846709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:03.908372 systemd[1]: Reloading finished in 172 ms. Jan 30 13:55:03.957818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:03.958689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:03.960151 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:55:03.960259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:03.961147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:04.205026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:55:04.207399 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:04.230089 kubelet[2690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:04.230089 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:55:04.230089 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:04.230089 kubelet[2690]: I0130 13:55:04.230071 2690 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:55:04.571849 kubelet[2690]: I0130 13:55:04.571775 2690 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:55:04.571849 kubelet[2690]: I0130 13:55:04.571788 2690 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:55:04.571953 kubelet[2690]: I0130 13:55:04.571946 2690 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:55:04.611198 kubelet[2690]: E0130 13:55:04.611131 2690 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.90.195:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:55:04.612037 kubelet[2690]: I0130 
13:55:04.611993 2690 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:55:04.618056 kubelet[2690]: E0130 13:55:04.618011 2690 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:55:04.618056 kubelet[2690]: I0130 13:55:04.618025 2690 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:55:04.627677 kubelet[2690]: I0130 13:55:04.627639 2690 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:55:04.628659 kubelet[2690]: I0130 13:55:04.628614 2690 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:55:04.628773 kubelet[2690]: I0130 13:55:04.628631 2690 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-fe6ab79c24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:55:04.628773 kubelet[2690]: I0130 13:55:04.628748 2690 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:55:04.628773 kubelet[2690]: I0130 13:55:04.628756 2690 container_manager_linux.go:304] "Creating device plugin manager"
Jan 30 13:55:04.628900 kubelet[2690]: I0130 13:55:04.628830 2690 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:55:04.631833 kubelet[2690]: I0130 13:55:04.631787 2690 kubelet.go:446] "Attempting to sync node with API server"
Jan 30 13:55:04.631833 kubelet[2690]: I0130 13:55:04.631799 2690 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:55:04.631833 kubelet[2690]: I0130 13:55:04.631808 2690 kubelet.go:352] "Adding apiserver pod source"
Jan 30 13:55:04.631833 kubelet[2690]: I0130 13:55:04.631814 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:55:04.634663 kubelet[2690]: I0130 13:55:04.634651 2690 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:55:04.635106 kubelet[2690]: I0130 13:55:04.635067 2690 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:55:04.635703 kubelet[2690]: W0130 13:55:04.635662 2690 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:55:04.636767 kubelet[2690]: W0130 13:55:04.636712 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-fe6ab79c24&limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:04.636767 kubelet[2690]: E0130 13:55:04.636760 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.90.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-fe6ab79c24&limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:04.637572 kubelet[2690]: I0130 13:55:04.637536 2690 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 13:55:04.637572 kubelet[2690]: I0130 13:55:04.637567 2690 server.go:1287] "Started kubelet"
Jan 30 13:55:04.637725 kubelet[2690]: I0130 13:55:04.637655 2690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:55:04.637868 kubelet[2690]: I0130 13:55:04.637861 2690 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:55:04.637943 kubelet[2690]: I0130 13:55:04.637882 2690 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:55:04.638156 kubelet[2690]: W0130 13:55:04.638135 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:04.638192 kubelet[2690]: E0130 13:55:04.638165 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.90.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:04.640258 kubelet[2690]: I0130 13:55:04.640244 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:55:04.640313 kubelet[2690]: I0130 13:55:04.640293 2690 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:55:04.641332 kubelet[2690]: I0130 13:55:04.641322 2690 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:55:04.641567 kubelet[2690]: E0130 13:55:04.640408 2690 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-fe6ab79c24\" not found"
Jan 30 13:55:04.641601 kubelet[2690]: E0130 13:55:04.641565 2690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-fe6ab79c24?timeout=10s\": dial tcp 147.75.90.195:6443: connect: connection refused" interval="200ms"
Jan 30 13:55:04.641601 kubelet[2690]: I0130 13:55:04.641581 2690 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 13:55:04.641647 kubelet[2690]: I0130 13:55:04.641640 2690 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:55:04.641700 kubelet[2690]: W0130 13:55:04.641666 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:04.641727 kubelet[2690]: E0130 13:55:04.641703 2690 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:55:04.641727 kubelet[2690]: E0130 13:55:04.641713 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.90.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:04.641762 kubelet[2690]: I0130 13:55:04.641758 2690 server.go:490] "Adding debug handlers to kubelet server"
Jan 30 13:55:04.641813 kubelet[2690]: I0130 13:55:04.641805 2690 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:55:04.641861 kubelet[2690]: I0130 13:55:04.641852 2690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:55:04.642311 kubelet[2690]: I0130 13:55:04.642302 2690 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:55:04.642421 kubelet[2690]: E0130 13:55:04.641458 2690 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.195:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.195:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-fe6ab79c24.181f7ce51b69507b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-fe6ab79c24,UID:ci-4186.1.0-a-fe6ab79c24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-fe6ab79c24,},FirstTimestamp:2025-01-30 13:55:04.637542523 +0000 UTC m=+0.428288972,LastTimestamp:2025-01-30 13:55:04.637542523 +0000 UTC m=+0.428288972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-fe6ab79c24,}"
Jan 30 13:55:04.649335 kubelet[2690]: I0130 13:55:04.649313 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:55:04.650169 kubelet[2690]: I0130 13:55:04.650161 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:55:04.650224 kubelet[2690]: I0130 13:55:04.650173 2690 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 13:55:04.650224 kubelet[2690]: I0130 13:55:04.650198 2690 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 13:55:04.650224 kubelet[2690]: I0130 13:55:04.650218 2690 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 13:55:04.650338 kubelet[2690]: E0130 13:55:04.650274 2690 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:55:04.650505 kubelet[2690]: W0130 13:55:04.650491 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:04.650540 kubelet[2690]: E0130 13:55:04.650515 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.90.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:04.651363 kubelet[2690]: I0130 13:55:04.651353 2690 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 30 13:55:04.651363 kubelet[2690]: I0130 13:55:04.651361 2690 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 30 13:55:04.651424 kubelet[2690]: I0130 13:55:04.651370 2690 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:55:04.652297 kubelet[2690]: I0130 13:55:04.652289 2690 policy_none.go:49] "None policy: Start"
Jan 30 13:55:04.652324 kubelet[2690]: I0130 13:55:04.652299 2690 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 13:55:04.652324 kubelet[2690]: I0130 13:55:04.652305 2690 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:55:04.655250 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:55:04.665184 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:55:04.667088 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:55:04.678091 kubelet[2690]: I0130 13:55:04.678040 2690 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:55:04.678203 kubelet[2690]: I0130 13:55:04.678159 2690 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:55:04.678203 kubelet[2690]: I0130 13:55:04.678166 2690 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:55:04.678324 kubelet[2690]: I0130 13:55:04.678285 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:55:04.678677 kubelet[2690]: E0130 13:55:04.678625 2690 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 13:55:04.678677 kubelet[2690]: E0130 13:55:04.678651 2690 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-fe6ab79c24\" not found"
Jan 30 13:55:04.773456 systemd[1]: Created slice kubepods-burstable-pode91195a50d3fc5107a09f9fbcb922966.slice - libcontainer container kubepods-burstable-pode91195a50d3fc5107a09f9fbcb922966.slice.
Jan 30 13:55:04.781920 kubelet[2690]: I0130 13:55:04.781865 2690 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.782685 kubelet[2690]: E0130 13:55:04.782587 2690 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.90.195:6443/api/v1/nodes\": dial tcp 147.75.90.195:6443: connect: connection refused" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.804226 kubelet[2690]: E0130 13:55:04.804132 2690 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-a-fe6ab79c24\" not found" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.813023 systemd[1]: Created slice kubepods-burstable-pod5ea97dcc280544478434c157a6d2d3e4.slice - libcontainer container kubepods-burstable-pod5ea97dcc280544478434c157a6d2d3e4.slice.
Jan 30 13:55:04.817536 kubelet[2690]: E0130 13:55:04.817481 2690 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-a-fe6ab79c24\" not found" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.824450 systemd[1]: Created slice kubepods-burstable-pod19ba6b279a42c8ea3e6ff3204994373d.slice - libcontainer container kubepods-burstable-pod19ba6b279a42c8ea3e6ff3204994373d.slice.
Jan 30 13:55:04.828595 kubelet[2690]: E0130 13:55:04.828499 2690 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-a-fe6ab79c24\" not found" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.842619 kubelet[2690]: E0130 13:55:04.842512 2690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-fe6ab79c24?timeout=10s\": dial tcp 147.75.90.195:6443: connect: connection refused" interval="400ms"
Jan 30 13:55:04.843798 kubelet[2690]: I0130 13:55:04.843701 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ea97dcc280544478434c157a6d2d3e4-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" (UID: \"5ea97dcc280544478434c157a6d2d3e4\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.843798 kubelet[2690]: I0130 13:55:04.843779 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ea97dcc280544478434c157a6d2d3e4-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" (UID: \"5ea97dcc280544478434c157a6d2d3e4\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844082 kubelet[2690]: I0130 13:55:04.843835 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844082 kubelet[2690]: I0130 13:55:04.843887 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844082 kubelet[2690]: I0130 13:55:04.843980 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844082 kubelet[2690]: I0130 13:55:04.844050 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e91195a50d3fc5107a09f9fbcb922966-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-fe6ab79c24\" (UID: \"e91195a50d3fc5107a09f9fbcb922966\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844408 kubelet[2690]: I0130 13:55:04.844098 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ea97dcc280544478434c157a6d2d3e4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" (UID: \"5ea97dcc280544478434c157a6d2d3e4\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844408 kubelet[2690]: I0130 13:55:04.844146 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.844408 kubelet[2690]: I0130 13:55:04.844186 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.987967 kubelet[2690]: I0130 13:55:04.987904 2690 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:04.988782 kubelet[2690]: E0130 13:55:04.988666 2690 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.90.195:6443/api/v1/nodes\": dial tcp 147.75.90.195:6443: connect: connection refused" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:05.106407 containerd[1796]: time="2025-01-30T13:55:05.106159395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-fe6ab79c24,Uid:e91195a50d3fc5107a09f9fbcb922966,Namespace:kube-system,Attempt:0,}"
Jan 30 13:55:05.118801 containerd[1796]: time="2025-01-30T13:55:05.118787513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-fe6ab79c24,Uid:5ea97dcc280544478434c157a6d2d3e4,Namespace:kube-system,Attempt:0,}"
Jan 30 13:55:05.129284 containerd[1796]: time="2025-01-30T13:55:05.129230720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-fe6ab79c24,Uid:19ba6b279a42c8ea3e6ff3204994373d,Namespace:kube-system,Attempt:0,}"
Jan 30 13:55:05.243215 kubelet[2690]: E0130 13:55:05.243161 2690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-fe6ab79c24?timeout=10s\": dial tcp 147.75.90.195:6443: connect: connection refused" interval="800ms"
Jan 30 13:55:05.391061 kubelet[2690]: I0130 13:55:05.390956 2690 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:05.391219 kubelet[2690]: E0130 13:55:05.391201 2690 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.90.195:6443/api/v1/nodes\": dial tcp 147.75.90.195:6443: connect: connection refused" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:05.460768 kubelet[2690]: W0130 13:55:05.460703 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:05.460768 kubelet[2690]: E0130 13:55:05.460745 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.90.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:05.516015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730866350.mount: Deactivated successfully.
Jan 30 13:55:05.517542 containerd[1796]: time="2025-01-30T13:55:05.517475952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:05.517682 containerd[1796]: time="2025-01-30T13:55:05.517659930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:55:05.518850 containerd[1796]: time="2025-01-30T13:55:05.518835877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:05.519247 containerd[1796]: time="2025-01-30T13:55:05.519234647Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:05.519612 containerd[1796]: time="2025-01-30T13:55:05.519593987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:55:05.519690 containerd[1796]: time="2025-01-30T13:55:05.519677151Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:05.519814 containerd[1796]: time="2025-01-30T13:55:05.519800933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:55:05.520967 containerd[1796]: time="2025-01-30T13:55:05.520953160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:05.522449 containerd[1796]: time="2025-01-30T13:55:05.522435244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 416.052219ms"
Jan 30 13:55:05.523184 containerd[1796]: time="2025-01-30T13:55:05.523169556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 404.344009ms"
Jan 30 13:55:05.524141 containerd[1796]: time="2025-01-30T13:55:05.524127175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 394.861891ms"
Jan 30 13:55:05.531626 kubelet[2690]: W0130 13:55:05.531566 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:05.531626 kubelet[2690]: E0130 13:55:05.531609 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.90.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:05.633446 containerd[1796]: time="2025-01-30T13:55:05.633386189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:55:05.633446 containerd[1796]: time="2025-01-30T13:55:05.633422552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:55:05.633446 containerd[1796]: time="2025-01-30T13:55:05.633434401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:05.633549 containerd[1796]: time="2025-01-30T13:55:05.633479782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:05.633589 containerd[1796]: time="2025-01-30T13:55:05.633558999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:55:05.633609 containerd[1796]: time="2025-01-30T13:55:05.633585824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:55:05.633609 containerd[1796]: time="2025-01-30T13:55:05.633580051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:55:05.633609 containerd[1796]: time="2025-01-30T13:55:05.633593639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:05.633653 containerd[1796]: time="2025-01-30T13:55:05.633604532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:55:05.633653 containerd[1796]: time="2025-01-30T13:55:05.633611842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:05.633653 containerd[1796]: time="2025-01-30T13:55:05.633637930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:05.633701 containerd[1796]: time="2025-01-30T13:55:05.633648060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:05.647179 kubelet[2690]: W0130 13:55:05.647090 2690 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.195:6443: connect: connection refused
Jan 30 13:55:05.647179 kubelet[2690]: E0130 13:55:05.647127 2690 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.90.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.90.195:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:05.651722 systemd[1]: Started cri-containerd-7e5df2262236d45342f43cc473973102687d0459beda84c2ad8e5c8c6a5e64dc.scope - libcontainer container 7e5df2262236d45342f43cc473973102687d0459beda84c2ad8e5c8c6a5e64dc.
Jan 30 13:55:05.652484 systemd[1]: Started cri-containerd-822f8b2998e3d4776e4ab5a29a2b474eafc6acbd04d48e9ffd0784ef6d882bb0.scope - libcontainer container 822f8b2998e3d4776e4ab5a29a2b474eafc6acbd04d48e9ffd0784ef6d882bb0.
Jan 30 13:55:05.653158 systemd[1]: Started cri-containerd-83734c29bf86dde59c29a4f17941648c28a2e37d2d90d98725b0394e3134d771.scope - libcontainer container 83734c29bf86dde59c29a4f17941648c28a2e37d2d90d98725b0394e3134d771.
Jan 30 13:55:05.674006 containerd[1796]: time="2025-01-30T13:55:05.673979668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-fe6ab79c24,Uid:5ea97dcc280544478434c157a6d2d3e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e5df2262236d45342f43cc473973102687d0459beda84c2ad8e5c8c6a5e64dc\""
Jan 30 13:55:05.674227 containerd[1796]: time="2025-01-30T13:55:05.674214212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-fe6ab79c24,Uid:e91195a50d3fc5107a09f9fbcb922966,Namespace:kube-system,Attempt:0,} returns sandbox id \"83734c29bf86dde59c29a4f17941648c28a2e37d2d90d98725b0394e3134d771\""
Jan 30 13:55:05.674728 containerd[1796]: time="2025-01-30T13:55:05.674716951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-fe6ab79c24,Uid:19ba6b279a42c8ea3e6ff3204994373d,Namespace:kube-system,Attempt:0,} returns sandbox id \"822f8b2998e3d4776e4ab5a29a2b474eafc6acbd04d48e9ffd0784ef6d882bb0\""
Jan 30 13:55:05.675698 containerd[1796]: time="2025-01-30T13:55:05.675682483Z" level=info msg="CreateContainer within sandbox \"83734c29bf86dde59c29a4f17941648c28a2e37d2d90d98725b0394e3134d771\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:55:05.675765 containerd[1796]: time="2025-01-30T13:55:05.675746737Z" level=info msg="CreateContainer within sandbox \"822f8b2998e3d4776e4ab5a29a2b474eafc6acbd04d48e9ffd0784ef6d882bb0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:55:05.675948 containerd[1796]: time="2025-01-30T13:55:05.675746004Z" level=info msg="CreateContainer within sandbox \"7e5df2262236d45342f43cc473973102687d0459beda84c2ad8e5c8c6a5e64dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:55:05.682646 containerd[1796]: time="2025-01-30T13:55:05.682602607Z" level=info msg="CreateContainer within sandbox \"83734c29bf86dde59c29a4f17941648c28a2e37d2d90d98725b0394e3134d771\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5e72320168360fd5f54c3d9b9512c3d51e6a7508553cc3ded75e8a1232c6371\""
Jan 30 13:55:05.682906 containerd[1796]: time="2025-01-30T13:55:05.682880850Z" level=info msg="StartContainer for \"a5e72320168360fd5f54c3d9b9512c3d51e6a7508553cc3ded75e8a1232c6371\""
Jan 30 13:55:05.683596 containerd[1796]: time="2025-01-30T13:55:05.683553811Z" level=info msg="CreateContainer within sandbox \"822f8b2998e3d4776e4ab5a29a2b474eafc6acbd04d48e9ffd0784ef6d882bb0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fed4e1dad209a517f5e9c06822d70a3908d14fc13b01efff87ccc1b504bf7f7e\""
Jan 30 13:55:05.683743 containerd[1796]: time="2025-01-30T13:55:05.683710729Z" level=info msg="StartContainer for \"fed4e1dad209a517f5e9c06822d70a3908d14fc13b01efff87ccc1b504bf7f7e\""
Jan 30 13:55:05.683975 containerd[1796]: time="2025-01-30T13:55:05.683939668Z" level=info msg="CreateContainer within sandbox \"7e5df2262236d45342f43cc473973102687d0459beda84c2ad8e5c8c6a5e64dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dbeb307b01d109f1a91d4f6ed8d5adb234280af23a96f9bfb3caa65b4d778d69\""
Jan 30 13:55:05.684114 containerd[1796]: time="2025-01-30T13:55:05.684094483Z" level=info msg="StartContainer for \"dbeb307b01d109f1a91d4f6ed8d5adb234280af23a96f9bfb3caa65b4d778d69\""
Jan 30 13:55:05.705743 systemd[1]: Started cri-containerd-a5e72320168360fd5f54c3d9b9512c3d51e6a7508553cc3ded75e8a1232c6371.scope - libcontainer container a5e72320168360fd5f54c3d9b9512c3d51e6a7508553cc3ded75e8a1232c6371.
Jan 30 13:55:05.706315 systemd[1]: Started cri-containerd-dbeb307b01d109f1a91d4f6ed8d5adb234280af23a96f9bfb3caa65b4d778d69.scope - libcontainer container dbeb307b01d109f1a91d4f6ed8d5adb234280af23a96f9bfb3caa65b4d778d69.
Jan 30 13:55:05.706891 systemd[1]: Started cri-containerd-fed4e1dad209a517f5e9c06822d70a3908d14fc13b01efff87ccc1b504bf7f7e.scope - libcontainer container fed4e1dad209a517f5e9c06822d70a3908d14fc13b01efff87ccc1b504bf7f7e.
Jan 30 13:55:05.740799 containerd[1796]: time="2025-01-30T13:55:05.740756253Z" level=info msg="StartContainer for \"fed4e1dad209a517f5e9c06822d70a3908d14fc13b01efff87ccc1b504bf7f7e\" returns successfully"
Jan 30 13:55:05.740904 containerd[1796]: time="2025-01-30T13:55:05.740826255Z" level=info msg="StartContainer for \"dbeb307b01d109f1a91d4f6ed8d5adb234280af23a96f9bfb3caa65b4d778d69\" returns successfully"
Jan 30 13:55:05.740904 containerd[1796]: time="2025-01-30T13:55:05.740856073Z" level=info msg="StartContainer for \"a5e72320168360fd5f54c3d9b9512c3d51e6a7508553cc3ded75e8a1232c6371\" returns successfully"
Jan 30 13:55:06.172996 kubelet[2690]: E0130 13:55:06.172975 2690 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-fe6ab79c24\" not found" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.193115 kubelet[2690]: I0130 13:55:06.193105 2690 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.200828 kubelet[2690]: I0130 13:55:06.200818 2690 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.200868 kubelet[2690]: E0130 13:55:06.200834 2690 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4186.1.0-a-fe6ab79c24\": node \"ci-4186.1.0-a-fe6ab79c24\" not found"
Jan 30 13:55:06.241170 kubelet[2690]: I0130 13:55:06.241130 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.294228 kubelet[2690]: E0130 13:55:06.294175 2690 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.0-a-fe6ab79c24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.294228 kubelet[2690]: I0130 13:55:06.294200 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.295124 kubelet[2690]: E0130 13:55:06.295086 2690 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.295124 kubelet[2690]: I0130 13:55:06.295098 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.295906 kubelet[2690]: E0130 13:55:06.295867 2690 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.633947 kubelet[2690]: I0130 13:55:06.633843 2690 apiserver.go:52] "Watching apiserver"
Jan 30 13:55:06.641747 kubelet[2690]: I0130 13:55:06.641662 2690 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:55:06.658022 kubelet[2690]: I0130 13:55:06.657935 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.660649 kubelet[2690]: I0130 13:55:06.660571 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.662770 kubelet[2690]: E0130 13:55:06.662697 2690 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.665274 kubelet[2690]: E0130 13:55:06.665216 2690 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.0-a-fe6ab79c24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.666246 kubelet[2690]: I0130 13:55:06.666172 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:06.670058 kubelet[2690]: E0130 13:55:06.669956 2690 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:07.669097 kubelet[2690]: I0130 13:55:07.669045 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:07.669847 kubelet[2690]: I0130 13:55:07.669264 2690 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24"
Jan 30 13:55:07.675875 kubelet[2690]: W0130 13:55:07.675818 2690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:55:07.676278 kubelet[2690]: W0130 13:55:07.676216 2690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:55:08.654570 systemd[1]: Reloading requested from client PID 3006 ('systemctl') (unit session-11.scope)...
Jan 30 13:55:08.654578 systemd[1]: Reloading...
Jan 30 13:55:08.694443 zram_generator::config[3045]: No configuration found.
Jan 30 13:55:08.763173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:08.832244 systemd[1]: Reloading finished in 177 ms. Jan 30 13:55:08.864189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:08.864267 kubelet[2690]: I0130 13:55:08.864208 2690 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:55:08.869904 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:55:08.870016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:08.880878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:09.122228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:09.124977 (kubelet)[3109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:09.145194 kubelet[3109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:09.145194 kubelet[3109]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:55:09.145194 kubelet[3109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:55:09.145456 kubelet[3109]: I0130 13:55:09.145212 3109 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:55:09.149865 kubelet[3109]: I0130 13:55:09.149849 3109 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:55:09.149865 kubelet[3109]: I0130 13:55:09.149863 3109 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:55:09.150065 kubelet[3109]: I0130 13:55:09.150054 3109 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:55:09.150917 kubelet[3109]: I0130 13:55:09.150906 3109 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:55:09.152313 kubelet[3109]: I0130 13:55:09.152300 3109 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:55:09.154057 kubelet[3109]: E0130 13:55:09.154013 3109 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:55:09.154057 kubelet[3109]: I0130 13:55:09.154030 3109 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:55:09.161632 kubelet[3109]: I0130 13:55:09.161615 3109 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:55:09.161782 kubelet[3109]: I0130 13:55:09.161739 3109 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:55:09.161914 kubelet[3109]: I0130 13:55:09.161759 3109 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-fe6ab79c24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:55:09.161914 kubelet[3109]: I0130 13:55:09.161890 3109 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 30 13:55:09.161914 kubelet[3109]: I0130 13:55:09.161898 3109 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:55:09.162061 kubelet[3109]: I0130 13:55:09.162006 3109 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:09.162586 kubelet[3109]: I0130 13:55:09.162315 3109 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:55:09.162586 kubelet[3109]: I0130 13:55:09.162341 3109 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:55:09.162586 kubelet[3109]: I0130 13:55:09.162367 3109 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:55:09.162586 kubelet[3109]: I0130 13:55:09.162377 3109 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:55:09.163732 kubelet[3109]: I0130 13:55:09.163712 3109 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:55:09.164153 kubelet[3109]: I0130 13:55:09.164143 3109 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:55:09.164622 kubelet[3109]: I0130 13:55:09.164612 3109 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:55:09.164660 kubelet[3109]: I0130 13:55:09.164641 3109 server.go:1287] "Started kubelet" Jan 30 13:55:09.164713 kubelet[3109]: I0130 13:55:09.164695 3109 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:55:09.164775 kubelet[3109]: I0130 13:55:09.164725 3109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:55:09.164930 kubelet[3109]: I0130 13:55:09.164918 3109 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:55:09.165539 kubelet[3109]: I0130 13:55:09.165529 3109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:55:09.165613 kubelet[3109]: I0130 
13:55:09.165597 3109 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:55:09.165678 kubelet[3109]: E0130 13:55:09.165638 3109 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-fe6ab79c24\" not found" Jan 30 13:55:09.165678 kubelet[3109]: I0130 13:55:09.165644 3109 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:55:09.165678 kubelet[3109]: I0130 13:55:09.165658 3109 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:55:09.165860 kubelet[3109]: I0130 13:55:09.165849 3109 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:55:09.166174 kubelet[3109]: I0130 13:55:09.166156 3109 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:55:09.166670 kubelet[3109]: I0130 13:55:09.166645 3109 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:55:09.167008 kubelet[3109]: E0130 13:55:09.166961 3109 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:55:09.167995 kubelet[3109]: I0130 13:55:09.167981 3109 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:55:09.167995 kubelet[3109]: I0130 13:55:09.167995 3109 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:55:09.173055 kubelet[3109]: I0130 13:55:09.173018 3109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:55:09.173844 kubelet[3109]: I0130 13:55:09.173824 3109 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:55:09.173844 kubelet[3109]: I0130 13:55:09.173848 3109 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:55:09.173970 kubelet[3109]: I0130 13:55:09.173865 3109 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:55:09.173970 kubelet[3109]: I0130 13:55:09.173871 3109 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:55:09.173970 kubelet[3109]: E0130 13:55:09.173907 3109 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:55:09.188077 kubelet[3109]: I0130 13:55:09.188028 3109 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:55:09.188077 kubelet[3109]: I0130 13:55:09.188041 3109 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:55:09.188077 kubelet[3109]: I0130 13:55:09.188054 3109 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:09.188219 kubelet[3109]: I0130 13:55:09.188177 3109 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:55:09.188219 kubelet[3109]: I0130 13:55:09.188187 3109 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:55:09.188219 kubelet[3109]: I0130 13:55:09.188202 3109 policy_none.go:49] "None policy: Start" Jan 30 13:55:09.188219 kubelet[3109]: I0130 13:55:09.188209 3109 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:55:09.188219 kubelet[3109]: I0130 13:55:09.188217 3109 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:55:09.188325 kubelet[3109]: I0130 13:55:09.188298 3109 state_mem.go:75] "Updated machine memory state" Jan 30 13:55:09.191016 kubelet[3109]: I0130 13:55:09.190979 3109 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:55:09.191104 kubelet[3109]: I0130 
13:55:09.191095 3109 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:55:09.191153 kubelet[3109]: I0130 13:55:09.191105 3109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:55:09.191195 kubelet[3109]: I0130 13:55:09.191176 3109 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:55:09.191618 kubelet[3109]: E0130 13:55:09.191569 3109 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:55:09.275238 kubelet[3109]: I0130 13:55:09.275211 3109 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.275362 kubelet[3109]: I0130 13:55:09.275286 3109 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.275362 kubelet[3109]: I0130 13:55:09.275309 3109 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.277931 kubelet[3109]: W0130 13:55:09.277919 3109 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:09.277979 kubelet[3109]: W0130 13:55:09.277945 3109 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:09.277979 kubelet[3109]: E0130 13:55:09.277962 3109 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.0-a-fe6ab79c24\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.278086 kubelet[3109]: W0130 13:55:09.278055 3109 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:09.278086 kubelet[3109]: E0130 13:55:09.278072 3109 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.293097 kubelet[3109]: I0130 13:55:09.293051 3109 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.296846 kubelet[3109]: I0130 13:55:09.296831 3109 kubelet_node_status.go:125] "Node was previously registered" node="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.296956 kubelet[3109]: I0130 13:55:09.296882 3109 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466781 kubelet[3109]: I0130 13:55:09.466755 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ea97dcc280544478434c157a6d2d3e4-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" (UID: \"5ea97dcc280544478434c157a6d2d3e4\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466781 kubelet[3109]: I0130 13:55:09.466781 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ea97dcc280544478434c157a6d2d3e4-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" (UID: \"5ea97dcc280544478434c157a6d2d3e4\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466906 kubelet[3109]: I0130 13:55:09.466795 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ea97dcc280544478434c157a6d2d3e4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-fe6ab79c24\" (UID: \"5ea97dcc280544478434c157a6d2d3e4\") " 
pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466906 kubelet[3109]: I0130 13:55:09.466807 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466906 kubelet[3109]: I0130 13:55:09.466817 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466906 kubelet[3109]: I0130 13:55:09.466826 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.466906 kubelet[3109]: I0130 13:55:09.466837 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.467007 kubelet[3109]: I0130 13:55:09.466854 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/19ba6b279a42c8ea3e6ff3204994373d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-fe6ab79c24\" (UID: \"19ba6b279a42c8ea3e6ff3204994373d\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:09.467007 kubelet[3109]: I0130 13:55:09.466870 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e91195a50d3fc5107a09f9fbcb922966-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-fe6ab79c24\" (UID: \"e91195a50d3fc5107a09f9fbcb922966\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:10.163578 kubelet[3109]: I0130 13:55:10.163558 3109 apiserver.go:52] "Watching apiserver" Jan 30 13:55:10.165731 kubelet[3109]: I0130 13:55:10.165694 3109 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:55:10.179812 kubelet[3109]: I0130 13:55:10.179764 3109 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:10.186590 kubelet[3109]: W0130 13:55:10.186533 3109 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:10.186669 kubelet[3109]: E0130 13:55:10.186608 3109 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.0-a-fe6ab79c24\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:10.195429 kubelet[3109]: I0130 13:55:10.195390 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-fe6ab79c24" podStartSLOduration=3.195368164 podStartE2EDuration="3.195368164s" podCreationTimestamp="2025-01-30 13:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-30 13:55:10.195350675 +0000 UTC m=+1.068530623" watchObservedRunningTime="2025-01-30 13:55:10.195368164 +0000 UTC m=+1.068548112" Jan 30 13:55:10.202409 kubelet[3109]: I0130 13:55:10.202389 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-fe6ab79c24" podStartSLOduration=1.202380013 podStartE2EDuration="1.202380013s" podCreationTimestamp="2025-01-30 13:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:10.198919584 +0000 UTC m=+1.072099532" watchObservedRunningTime="2025-01-30 13:55:10.202380013 +0000 UTC m=+1.075559962" Jan 30 13:55:10.202480 kubelet[3109]: I0130 13:55:10.202439 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-fe6ab79c24" podStartSLOduration=3.202435462 podStartE2EDuration="3.202435462s" podCreationTimestamp="2025-01-30 13:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:10.202408761 +0000 UTC m=+1.075588708" watchObservedRunningTime="2025-01-30 13:55:10.202435462 +0000 UTC m=+1.075615406" Jan 30 13:55:13.361048 sudo[2077]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:13.361761 sshd[2076]: Connection closed by 139.178.89.65 port 42814 Jan 30 13:55:13.361882 sshd-session[2073]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:13.363258 systemd[1]: sshd@8-147.75.90.195:22-139.178.89.65:42814.service: Deactivated successfully. Jan 30 13:55:13.364158 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:55:13.364239 systemd[1]: session-11.scope: Consumed 3.626s CPU time, 164.8M memory peak, 0B memory swap peak. Jan 30 13:55:13.364881 systemd-logind[1786]: Session 11 logged out. Waiting for processes to exit. 
Jan 30 13:55:13.365407 systemd-logind[1786]: Removed session 11. Jan 30 13:55:14.545168 kubelet[3109]: I0130 13:55:14.545105 3109 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:55:14.546115 containerd[1796]: time="2025-01-30T13:55:14.545858733Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:55:14.546838 kubelet[3109]: I0130 13:55:14.546378 3109 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:55:15.329344 systemd[1]: Created slice kubepods-besteffort-podb10091ed_3a60_4215_b9c2_75e26f905611.slice - libcontainer container kubepods-besteffort-podb10091ed_3a60_4215_b9c2_75e26f905611.slice. Jan 30 13:55:15.413104 kubelet[3109]: I0130 13:55:15.412986 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b10091ed-3a60-4215-b9c2-75e26f905611-xtables-lock\") pod \"kube-proxy-n6fr5\" (UID: \"b10091ed-3a60-4215-b9c2-75e26f905611\") " pod="kube-system/kube-proxy-n6fr5" Jan 30 13:55:15.413104 kubelet[3109]: I0130 13:55:15.413073 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf97q\" (UniqueName: \"kubernetes.io/projected/b10091ed-3a60-4215-b9c2-75e26f905611-kube-api-access-mf97q\") pod \"kube-proxy-n6fr5\" (UID: \"b10091ed-3a60-4215-b9c2-75e26f905611\") " pod="kube-system/kube-proxy-n6fr5" Jan 30 13:55:15.413437 kubelet[3109]: I0130 13:55:15.413127 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b10091ed-3a60-4215-b9c2-75e26f905611-lib-modules\") pod \"kube-proxy-n6fr5\" (UID: \"b10091ed-3a60-4215-b9c2-75e26f905611\") " pod="kube-system/kube-proxy-n6fr5" Jan 30 13:55:15.413437 kubelet[3109]: I0130 13:55:15.413180 3109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b10091ed-3a60-4215-b9c2-75e26f905611-kube-proxy\") pod \"kube-proxy-n6fr5\" (UID: \"b10091ed-3a60-4215-b9c2-75e26f905611\") " pod="kube-system/kube-proxy-n6fr5" Jan 30 13:55:15.649229 containerd[1796]: time="2025-01-30T13:55:15.649030613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6fr5,Uid:b10091ed-3a60-4215-b9c2-75e26f905611,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:15.680557 systemd[1]: Created slice kubepods-besteffort-pod5b63b87e_df62_4ea1_9a82_85ccadd80bb4.slice - libcontainer container kubepods-besteffort-pod5b63b87e_df62_4ea1_9a82_85ccadd80bb4.slice. Jan 30 13:55:15.714933 kubelet[3109]: I0130 13:55:15.714864 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55v8l\" (UniqueName: \"kubernetes.io/projected/5b63b87e-df62-4ea1-9a82-85ccadd80bb4-kube-api-access-55v8l\") pod \"tigera-operator-7d68577dc5-4p69k\" (UID: \"5b63b87e-df62-4ea1-9a82-85ccadd80bb4\") " pod="tigera-operator/tigera-operator-7d68577dc5-4p69k" Jan 30 13:55:15.715659 kubelet[3109]: I0130 13:55:15.714976 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b63b87e-df62-4ea1-9a82-85ccadd80bb4-var-lib-calico\") pod \"tigera-operator-7d68577dc5-4p69k\" (UID: \"5b63b87e-df62-4ea1-9a82-85ccadd80bb4\") " pod="tigera-operator/tigera-operator-7d68577dc5-4p69k" Jan 30 13:55:15.983513 containerd[1796]: time="2025-01-30T13:55:15.983376953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-4p69k,Uid:5b63b87e-df62-4ea1-9a82-85ccadd80bb4,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:55:16.190132 containerd[1796]: time="2025-01-30T13:55:16.190050796Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:16.190279 containerd[1796]: time="2025-01-30T13:55:16.190265554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:16.190302 containerd[1796]: time="2025-01-30T13:55:16.190279705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:16.190330 containerd[1796]: time="2025-01-30T13:55:16.190321495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:16.191278 containerd[1796]: time="2025-01-30T13:55:16.191243307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:16.191278 containerd[1796]: time="2025-01-30T13:55:16.191271771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:16.191366 containerd[1796]: time="2025-01-30T13:55:16.191279414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:16.191366 containerd[1796]: time="2025-01-30T13:55:16.191319041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:16.208727 systemd[1]: Started cri-containerd-3c92c1564797fc8d6bd77e52464aba57aa9f911c5379d46f7bbf8fe669c174a8.scope - libcontainer container 3c92c1564797fc8d6bd77e52464aba57aa9f911c5379d46f7bbf8fe669c174a8. Jan 30 13:55:16.210525 systemd[1]: Started cri-containerd-670724c3bc0d937c80f8454629d8819124973814f419fdf243c5f91922ff0e89.scope - libcontainer container 670724c3bc0d937c80f8454629d8819124973814f419fdf243c5f91922ff0e89. 
Jan 30 13:55:16.219455 containerd[1796]: time="2025-01-30T13:55:16.219432344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6fr5,Uid:b10091ed-3a60-4215-b9c2-75e26f905611,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c92c1564797fc8d6bd77e52464aba57aa9f911c5379d46f7bbf8fe669c174a8\"" Jan 30 13:55:16.220715 containerd[1796]: time="2025-01-30T13:55:16.220700602Z" level=info msg="CreateContainer within sandbox \"3c92c1564797fc8d6bd77e52464aba57aa9f911c5379d46f7bbf8fe669c174a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:55:16.226440 containerd[1796]: time="2025-01-30T13:55:16.226395336Z" level=info msg="CreateContainer within sandbox \"3c92c1564797fc8d6bd77e52464aba57aa9f911c5379d46f7bbf8fe669c174a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b07a496c44964f6fba62c911a81b396b2b6aa78069b0701e0fe0b0633b6a419\"" Jan 30 13:55:16.226699 containerd[1796]: time="2025-01-30T13:55:16.226645669Z" level=info msg="StartContainer for \"6b07a496c44964f6fba62c911a81b396b2b6aa78069b0701e0fe0b0633b6a419\"" Jan 30 13:55:16.234041 containerd[1796]: time="2025-01-30T13:55:16.233979448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-4p69k,Uid:5b63b87e-df62-4ea1-9a82-85ccadd80bb4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"670724c3bc0d937c80f8454629d8819124973814f419fdf243c5f91922ff0e89\"" Jan 30 13:55:16.234896 containerd[1796]: time="2025-01-30T13:55:16.234882959Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:55:16.250696 systemd[1]: Started cri-containerd-6b07a496c44964f6fba62c911a81b396b2b6aa78069b0701e0fe0b0633b6a419.scope - libcontainer container 6b07a496c44964f6fba62c911a81b396b2b6aa78069b0701e0fe0b0633b6a419. 
Jan 30 13:55:16.265737 containerd[1796]: time="2025-01-30T13:55:16.265708326Z" level=info msg="StartContainer for \"6b07a496c44964f6fba62c911a81b396b2b6aa78069b0701e0fe0b0633b6a419\" returns successfully" Jan 30 13:55:17.199344 kubelet[3109]: I0130 13:55:17.199313 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n6fr5" podStartSLOduration=2.199303109 podStartE2EDuration="2.199303109s" podCreationTimestamp="2025-01-30 13:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:17.199286087 +0000 UTC m=+8.072466035" watchObservedRunningTime="2025-01-30 13:55:17.199303109 +0000 UTC m=+8.072483055" Jan 30 13:55:17.790970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902398761.mount: Deactivated successfully. Jan 30 13:55:18.016669 containerd[1796]: time="2025-01-30T13:55:18.016644870Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:18.016878 containerd[1796]: time="2025-01-30T13:55:18.016858973Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:55:18.017195 containerd[1796]: time="2025-01-30T13:55:18.017185428Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:18.018201 containerd[1796]: time="2025-01-30T13:55:18.018188928Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:18.018682 containerd[1796]: time="2025-01-30T13:55:18.018672254Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id 
\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.78377384s" Jan 30 13:55:18.018705 containerd[1796]: time="2025-01-30T13:55:18.018687050Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:55:18.019609 containerd[1796]: time="2025-01-30T13:55:18.019597987Z" level=info msg="CreateContainer within sandbox \"670724c3bc0d937c80f8454629d8819124973814f419fdf243c5f91922ff0e89\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:55:18.023191 containerd[1796]: time="2025-01-30T13:55:18.023148256Z" level=info msg="CreateContainer within sandbox \"670724c3bc0d937c80f8454629d8819124973814f419fdf243c5f91922ff0e89\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c50cf4affbbb5e3a0d9709012cb6055404bb1259591e930004ad2d5d545a7fd\"" Jan 30 13:55:18.023353 containerd[1796]: time="2025-01-30T13:55:18.023342768Z" level=info msg="StartContainer for \"3c50cf4affbbb5e3a0d9709012cb6055404bb1259591e930004ad2d5d545a7fd\"" Jan 30 13:55:18.047591 systemd[1]: Started cri-containerd-3c50cf4affbbb5e3a0d9709012cb6055404bb1259591e930004ad2d5d545a7fd.scope - libcontainer container 3c50cf4affbbb5e3a0d9709012cb6055404bb1259591e930004ad2d5d545a7fd. 
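The pull record above reports both the image size (`size \"21758492\"` bytes) and the wall-clock pull time (`in 1.78377384s`), which is enough to estimate the effective download throughput. A small sketch using those two logged values (the ~12 MB/s figure is derived, not logged):

```go
package main

import "fmt"

func main() {
	// Values taken from the containerd pull log entry above:
	// repo digest size 21758492 bytes, pulled in 1.78377384s.
	const sizeBytes = 21758492.0
	const pullSeconds = 1.78377384

	// Decimal megabytes per second, as a rough effective-throughput estimate
	// (includes registry latency and decompression, not just raw transfer).
	mbPerSec := sizeBytes / pullSeconds / 1e6
	fmt.Printf("effective pull throughput: %.2f MB/s\n", mbPerSec)
}
```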
Jan 30 13:55:18.059199 containerd[1796]: time="2025-01-30T13:55:18.059174474Z" level=info msg="StartContainer for \"3c50cf4affbbb5e3a0d9709012cb6055404bb1259591e930004ad2d5d545a7fd\" returns successfully" Jan 30 13:55:18.213301 kubelet[3109]: I0130 13:55:18.213151 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-4p69k" podStartSLOduration=1.428720036 podStartE2EDuration="3.213112808s" podCreationTimestamp="2025-01-30 13:55:15 +0000 UTC" firstStartedPulling="2025-01-30 13:55:16.234645727 +0000 UTC m=+7.107825676" lastFinishedPulling="2025-01-30 13:55:18.019038496 +0000 UTC m=+8.892218448" observedRunningTime="2025-01-30 13:55:18.212823356 +0000 UTC m=+9.086003366" watchObservedRunningTime="2025-01-30 13:55:18.213112808 +0000 UTC m=+9.086292807" Jan 30 13:55:21.085537 systemd[1]: Created slice kubepods-besteffort-pod1a7c43d3_3cc2_4713_b702_df624f41af06.slice - libcontainer container kubepods-besteffort-pod1a7c43d3_3cc2_4713_b702_df624f41af06.slice. Jan 30 13:55:21.092465 systemd[1]: Created slice kubepods-besteffort-pod44965af7_38b6_4220_ba83_b88151efb3db.slice - libcontainer container kubepods-besteffort-pod44965af7_38b6_4220_ba83_b88151efb3db.slice. 
Jan 30 13:55:21.100863 kubelet[3109]: E0130 13:55:21.100825 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:21.155390 kubelet[3109]: I0130 13:55:21.155369 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-var-run-calico\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155390 kubelet[3109]: I0130 13:55:21.155391 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-var-lib-calico\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155502 kubelet[3109]: I0130 13:55:21.155403 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b153ff53-b790-4ffe-82ac-a800a8f52eef-kubelet-dir\") pod \"csi-node-driver-jnwwl\" (UID: \"b153ff53-b790-4ffe-82ac-a800a8f52eef\") " pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:21.155502 kubelet[3109]: I0130 13:55:21.155416 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhmm\" (UniqueName: \"kubernetes.io/projected/1a7c43d3-3cc2-4713-b702-df624f41af06-kube-api-access-pjhmm\") pod \"calico-typha-b54fd48f5-wgvl5\" (UID: \"1a7c43d3-3cc2-4713-b702-df624f41af06\") " pod="calico-system/calico-typha-b54fd48f5-wgvl5" Jan 30 13:55:21.155502 
kubelet[3109]: I0130 13:55:21.155434 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-xtables-lock\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155502 kubelet[3109]: I0130 13:55:21.155445 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-policysync\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155502 kubelet[3109]: I0130 13:55:21.155454 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b153ff53-b790-4ffe-82ac-a800a8f52eef-socket-dir\") pod \"csi-node-driver-jnwwl\" (UID: \"b153ff53-b790-4ffe-82ac-a800a8f52eef\") " pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:21.155601 kubelet[3109]: I0130 13:55:21.155465 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-lib-modules\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155601 kubelet[3109]: I0130 13:55:21.155475 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrnb\" (UniqueName: \"kubernetes.io/projected/44965af7-38b6-4220-ba83-b88151efb3db-kube-api-access-ndrnb\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155601 kubelet[3109]: I0130 13:55:21.155485 3109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44965af7-38b6-4220-ba83-b88151efb3db-tigera-ca-bundle\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155601 kubelet[3109]: I0130 13:55:21.155496 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/44965af7-38b6-4220-ba83-b88151efb3db-node-certs\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155601 kubelet[3109]: I0130 13:55:21.155506 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-flexvol-driver-host\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155695 kubelet[3109]: I0130 13:55:21.155517 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-cni-log-dir\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155695 kubelet[3109]: I0130 13:55:21.155538 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fw5t\" (UniqueName: \"kubernetes.io/projected/b153ff53-b790-4ffe-82ac-a800a8f52eef-kube-api-access-5fw5t\") pod \"csi-node-driver-jnwwl\" (UID: \"b153ff53-b790-4ffe-82ac-a800a8f52eef\") " pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:21.155695 kubelet[3109]: I0130 13:55:21.155560 3109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7c43d3-3cc2-4713-b702-df624f41af06-tigera-ca-bundle\") pod \"calico-typha-b54fd48f5-wgvl5\" (UID: \"1a7c43d3-3cc2-4713-b702-df624f41af06\") " pod="calico-system/calico-typha-b54fd48f5-wgvl5" Jan 30 13:55:21.155695 kubelet[3109]: I0130 13:55:21.155571 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a7c43d3-3cc2-4713-b702-df624f41af06-typha-certs\") pod \"calico-typha-b54fd48f5-wgvl5\" (UID: \"1a7c43d3-3cc2-4713-b702-df624f41af06\") " pod="calico-system/calico-typha-b54fd48f5-wgvl5" Jan 30 13:55:21.155695 kubelet[3109]: I0130 13:55:21.155582 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-cni-net-dir\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155832 kubelet[3109]: I0130 13:55:21.155593 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b153ff53-b790-4ffe-82ac-a800a8f52eef-registration-dir\") pod \"csi-node-driver-jnwwl\" (UID: \"b153ff53-b790-4ffe-82ac-a800a8f52eef\") " pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:21.155832 kubelet[3109]: I0130 13:55:21.155602 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/44965af7-38b6-4220-ba83-b88151efb3db-cni-bin-dir\") pod \"calico-node-q69b5\" (UID: \"44965af7-38b6-4220-ba83-b88151efb3db\") " pod="calico-system/calico-node-q69b5" Jan 30 13:55:21.155832 kubelet[3109]: I0130 13:55:21.155611 3109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b153ff53-b790-4ffe-82ac-a800a8f52eef-varrun\") pod \"csi-node-driver-jnwwl\" (UID: \"b153ff53-b790-4ffe-82ac-a800a8f52eef\") " pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:21.257954 kubelet[3109]: E0130 13:55:21.257876 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.257954 kubelet[3109]: W0130 13:55:21.257940 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.258500 kubelet[3109]: E0130 13:55:21.258029 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.258770 kubelet[3109]: E0130 13:55:21.258747 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.258770 kubelet[3109]: W0130 13:55:21.258765 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.258926 kubelet[3109]: E0130 13:55:21.258796 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.259063 kubelet[3109]: E0130 13:55:21.259020 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.259063 kubelet[3109]: W0130 13:55:21.259033 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.259182 kubelet[3109]: E0130 13:55:21.259068 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.259307 kubelet[3109]: E0130 13:55:21.259272 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.259307 kubelet[3109]: W0130 13:55:21.259280 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.259307 kubelet[3109]: E0130 13:55:21.259298 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.259496 kubelet[3109]: E0130 13:55:21.259422 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.259496 kubelet[3109]: W0130 13:55:21.259436 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.259496 kubelet[3109]: E0130 13:55:21.259454 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.259603 kubelet[3109]: E0130 13:55:21.259597 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.259631 kubelet[3109]: W0130 13:55:21.259605 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.259660 kubelet[3109]: E0130 13:55:21.259626 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.259804 kubelet[3109]: E0130 13:55:21.259793 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.259834 kubelet[3109]: W0130 13:55:21.259803 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.259834 kubelet[3109]: E0130 13:55:21.259824 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.259951 kubelet[3109]: E0130 13:55:21.259944 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.259951 kubelet[3109]: W0130 13:55:21.259951 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260010 kubelet[3109]: E0130 13:55:21.259970 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.260074 kubelet[3109]: E0130 13:55:21.260065 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260074 kubelet[3109]: W0130 13:55:21.260071 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260161 kubelet[3109]: E0130 13:55:21.260089 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.260211 kubelet[3109]: E0130 13:55:21.260202 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260211 kubelet[3109]: W0130 13:55:21.260209 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260292 kubelet[3109]: E0130 13:55:21.260227 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.260343 kubelet[3109]: E0130 13:55:21.260336 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260378 kubelet[3109]: W0130 13:55:21.260343 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260378 kubelet[3109]: E0130 13:55:21.260363 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.260466 kubelet[3109]: E0130 13:55:21.260459 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260466 kubelet[3109]: W0130 13:55:21.260465 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260533 kubelet[3109]: E0130 13:55:21.260482 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.260618 kubelet[3109]: E0130 13:55:21.260611 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260618 kubelet[3109]: W0130 13:55:21.260617 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260663 kubelet[3109]: E0130 13:55:21.260649 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.260805 kubelet[3109]: E0130 13:55:21.260799 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260836 kubelet[3109]: W0130 13:55:21.260805 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260836 kubelet[3109]: E0130 13:55:21.260823 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.260956 kubelet[3109]: E0130 13:55:21.260950 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.260984 kubelet[3109]: W0130 13:55:21.260957 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.260984 kubelet[3109]: E0130 13:55:21.260973 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.261056 kubelet[3109]: E0130 13:55:21.261049 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261081 kubelet[3109]: W0130 13:55:21.261055 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261081 kubelet[3109]: E0130 13:55:21.261063 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.261160 kubelet[3109]: E0130 13:55:21.261154 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261184 kubelet[3109]: W0130 13:55:21.261160 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261184 kubelet[3109]: E0130 13:55:21.261168 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.261304 kubelet[3109]: E0130 13:55:21.261297 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261304 kubelet[3109]: W0130 13:55:21.261303 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261354 kubelet[3109]: E0130 13:55:21.261311 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.261415 kubelet[3109]: E0130 13:55:21.261409 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261449 kubelet[3109]: W0130 13:55:21.261415 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261449 kubelet[3109]: E0130 13:55:21.261426 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.261545 kubelet[3109]: E0130 13:55:21.261538 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261571 kubelet[3109]: W0130 13:55:21.261545 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261571 kubelet[3109]: E0130 13:55:21.261552 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.261665 kubelet[3109]: E0130 13:55:21.261658 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261690 kubelet[3109]: W0130 13:55:21.261664 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261690 kubelet[3109]: E0130 13:55:21.261672 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.261786 kubelet[3109]: E0130 13:55:21.261780 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261811 kubelet[3109]: W0130 13:55:21.261786 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261811 kubelet[3109]: E0130 13:55:21.261794 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.261922 kubelet[3109]: E0130 13:55:21.261916 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.261947 kubelet[3109]: W0130 13:55:21.261922 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.261947 kubelet[3109]: E0130 13:55:21.261928 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.264823 kubelet[3109]: E0130 13:55:21.264810 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.264823 kubelet[3109]: W0130 13:55:21.264821 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.264920 kubelet[3109]: E0130 13:55:21.264837 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.265012 kubelet[3109]: E0130 13:55:21.265003 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.265012 kubelet[3109]: W0130 13:55:21.265011 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.265072 kubelet[3109]: E0130 13:55:21.265022 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:21.267593 kubelet[3109]: E0130 13:55:21.267567 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:21.267593 kubelet[3109]: W0130 13:55:21.267577 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:21.267593 kubelet[3109]: E0130 13:55:21.267586 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:21.390849 containerd[1796]: time="2025-01-30T13:55:21.390600216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b54fd48f5-wgvl5,Uid:1a7c43d3-3cc2-4713-b702-df624f41af06,Namespace:calico-system,Attempt:0,}" Jan 30 13:55:21.395059 containerd[1796]: time="2025-01-30T13:55:21.395019281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q69b5,Uid:44965af7-38b6-4220-ba83-b88151efb3db,Namespace:calico-system,Attempt:0,}" Jan 30 13:55:21.402103 containerd[1796]: time="2025-01-30T13:55:21.402061397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:21.402103 containerd[1796]: time="2025-01-30T13:55:21.402092592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:21.402103 containerd[1796]: time="2025-01-30T13:55:21.402099752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:21.402209 containerd[1796]: time="2025-01-30T13:55:21.402141699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:21.404289 containerd[1796]: time="2025-01-30T13:55:21.404252898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:21.404289 containerd[1796]: time="2025-01-30T13:55:21.404280135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:21.404360 containerd[1796]: time="2025-01-30T13:55:21.404290492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:21.404516 containerd[1796]: time="2025-01-30T13:55:21.404502911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:21.422691 systemd[1]: Started cri-containerd-0aa4fc2e0609661d7b9904e0b5480417ae8a03069ca54fc74b6d1a7b47281352.scope - libcontainer container 0aa4fc2e0609661d7b9904e0b5480417ae8a03069ca54fc74b6d1a7b47281352. Jan 30 13:55:21.424321 systemd[1]: Started cri-containerd-b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064.scope - libcontainer container b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064. Jan 30 13:55:21.436374 containerd[1796]: time="2025-01-30T13:55:21.436350499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q69b5,Uid:44965af7-38b6-4220-ba83-b88151efb3db,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\"" Jan 30 13:55:21.437247 containerd[1796]: time="2025-01-30T13:55:21.437203610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:55:21.447809 containerd[1796]: time="2025-01-30T13:55:21.447788441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b54fd48f5-wgvl5,Uid:1a7c43d3-3cc2-4713-b702-df624f41af06,Namespace:calico-system,Attempt:0,} returns sandbox id \"0aa4fc2e0609661d7b9904e0b5480417ae8a03069ca54fc74b6d1a7b47281352\"" Jan 30 13:55:22.814546 update_engine[1791]: I20250130 13:55:22.814473 1791 update_attempter.cc:509] Updating boot flags... Jan 30 13:55:22.841512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233199248.mount: Deactivated successfully. 
Jan 30 13:55:22.846430 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (3714) Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847647 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:22.862885 kubelet[3109]: W0130 13:55:22.847661 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847671 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847760 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:22.862885 kubelet[3109]: W0130 13:55:22.847765 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847770 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847847 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:22.862885 kubelet[3109]: W0130 13:55:22.847852 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847859 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:22.862885 kubelet[3109]: E0130 13:55:22.847969 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:22.863199 kubelet[3109]: W0130 13:55:22.847973 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:22.863199 kubelet[3109]: E0130 13:55:22.847978 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:55:22.863199 kubelet[3109]: E0130 13:55:22.848053 3109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:55:22.863199 kubelet[3109]: W0130 13:55:22.848058 3109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:55:22.863199 kubelet[3109]: E0130 13:55:22.848065 3109 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:55:22.905437 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (3714) Jan 30 13:55:22.932433 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (3714) Jan 30 13:55:22.933931 containerd[1796]: time="2025-01-30T13:55:22.933909066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:22.934146 containerd[1796]: time="2025-01-30T13:55:22.934085747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:55:22.934478 containerd[1796]: time="2025-01-30T13:55:22.934461572Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:22.935368 containerd[1796]: time="2025-01-30T13:55:22.935352827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:22.935839 containerd[1796]: 
time="2025-01-30T13:55:22.935824606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.498604377s" Jan 30 13:55:22.935887 containerd[1796]: time="2025-01-30T13:55:22.935839783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:55:22.936340 containerd[1796]: time="2025-01-30T13:55:22.936327312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:55:22.936921 containerd[1796]: time="2025-01-30T13:55:22.936909581Z" level=info msg="CreateContainer within sandbox \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:55:22.942334 containerd[1796]: time="2025-01-30T13:55:22.942313081Z" level=info msg="CreateContainer within sandbox \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521\"" Jan 30 13:55:22.942595 containerd[1796]: time="2025-01-30T13:55:22.942579093Z" level=info msg="StartContainer for \"6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521\"" Jan 30 13:55:22.965748 systemd[1]: Started cri-containerd-6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521.scope - libcontainer container 6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521. 
Jan 30 13:55:22.978910 containerd[1796]: time="2025-01-30T13:55:22.978887282Z" level=info msg="StartContainer for \"6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521\" returns successfully" Jan 30 13:55:22.983849 systemd[1]: cri-containerd-6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521.scope: Deactivated successfully. Jan 30 13:55:23.174887 kubelet[3109]: E0130 13:55:23.174670 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:23.226682 containerd[1796]: time="2025-01-30T13:55:23.226640978Z" level=info msg="shim disconnected" id=6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521 namespace=k8s.io Jan 30 13:55:23.226682 containerd[1796]: time="2025-01-30T13:55:23.226678890Z" level=warning msg="cleaning up after shim disconnected" id=6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521 namespace=k8s.io Jan 30 13:55:23.226812 containerd[1796]: time="2025-01-30T13:55:23.226688892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:55:23.265606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f534bc51e3a8178cd9ea2e6e76ce236d188e563b251ec63f1109403de2fc521-rootfs.mount: Deactivated successfully. 
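The repeated `driver-call.go` errors above come from the kubelet probing the FlexVolume plugin directory and finding the `nodeagent~uds/uds` binary absent. A minimal sketch of that probe, reproduced by hand — paths are taken verbatim from the log, and the expected-JSON comment reflects the FlexVolume driver contract, not anything this node actually returned:

```shell
# Reproduce the kubelet's FlexVolume "init" probe by hand.
# PLUGIN_DIR and DRIVER are the exact paths from the log above.
PLUGIN_DIR=/opt/libexec/kubernetes/kubelet-plugins/volume/exec
DRIVER="$PLUGIN_DIR/nodeagent~uds/uds"

if [ -x "$DRIVER" ]; then
  # A conforming driver prints JSON on stdout, e.g.
  # {"status": "Success", "capabilities": {"attach": false}}
  # An empty stdout is what produces "unexpected end of JSON input".
  "$DRIVER" init
else
  echo "driver missing or not executable: $DRIVER"
fi
```

If the binary is missing, the kubelet logs exactly the pair seen above: `executable file not found in $PATH` from the failed exec, then the JSON unmarshal error on the empty output.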
Jan 30 13:55:25.045214 containerd[1796]: time="2025-01-30T13:55:25.045186232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:25.045449 containerd[1796]: time="2025-01-30T13:55:25.045399642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:55:25.045736 containerd[1796]: time="2025-01-30T13:55:25.045726024Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:25.046649 containerd[1796]: time="2025-01-30T13:55:25.046637819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:25.047044 containerd[1796]: time="2025-01-30T13:55:25.047034578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.110688817s" Jan 30 13:55:25.047064 containerd[1796]: time="2025-01-30T13:55:25.047048596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:55:25.047553 containerd[1796]: time="2025-01-30T13:55:25.047543269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:55:25.050384 containerd[1796]: time="2025-01-30T13:55:25.050366781Z" level=info msg="CreateContainer within sandbox \"0aa4fc2e0609661d7b9904e0b5480417ae8a03069ca54fc74b6d1a7b47281352\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:55:25.055227 containerd[1796]: time="2025-01-30T13:55:25.055213367Z" level=info msg="CreateContainer within sandbox \"0aa4fc2e0609661d7b9904e0b5480417ae8a03069ca54fc74b6d1a7b47281352\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4cc6540535ef2719239b969c9a42523bd09c608929782c5c24a85a91992d3ae8\"" Jan 30 13:55:25.055409 containerd[1796]: time="2025-01-30T13:55:25.055395610Z" level=info msg="StartContainer for \"4cc6540535ef2719239b969c9a42523bd09c608929782c5c24a85a91992d3ae8\"" Jan 30 13:55:25.074713 systemd[1]: Started cri-containerd-4cc6540535ef2719239b969c9a42523bd09c608929782c5c24a85a91992d3ae8.scope - libcontainer container 4cc6540535ef2719239b969c9a42523bd09c608929782c5c24a85a91992d3ae8. Jan 30 13:55:25.098380 containerd[1796]: time="2025-01-30T13:55:25.098359237Z" level=info msg="StartContainer for \"4cc6540535ef2719239b969c9a42523bd09c608929782c5c24a85a91992d3ae8\" returns successfully" Jan 30 13:55:25.175277 kubelet[3109]: E0130 13:55:25.175189 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:25.234577 kubelet[3109]: I0130 13:55:25.234532 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b54fd48f5-wgvl5" podStartSLOduration=0.635333805 podStartE2EDuration="4.234513261s" podCreationTimestamp="2025-01-30 13:55:21 +0000 UTC" firstStartedPulling="2025-01-30 13:55:21.44831224 +0000 UTC m=+12.321492188" lastFinishedPulling="2025-01-30 13:55:25.047491698 +0000 UTC m=+15.920671644" observedRunningTime="2025-01-30 13:55:25.234259783 +0000 UTC m=+16.107439741" watchObservedRunningTime="2025-01-30 13:55:25.234513261 +0000 UTC m=+16.107693210" 
Jan 30 13:55:27.174991 kubelet[3109]: E0130 13:55:27.174951 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:28.820583 containerd[1796]: time="2025-01-30T13:55:28.820560418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:28.820801 containerd[1796]: time="2025-01-30T13:55:28.820733445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:55:28.821015 containerd[1796]: time="2025-01-30T13:55:28.821003534Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:28.822113 containerd[1796]: time="2025-01-30T13:55:28.822101943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:28.822569 containerd[1796]: time="2025-01-30T13:55:28.822557210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.775000005s" Jan 30 13:55:28.822593 containerd[1796]: time="2025-01-30T13:55:28.822573395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:55:28.823488 containerd[1796]: time="2025-01-30T13:55:28.823477060Z" level=info msg="CreateContainer within sandbox \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:55:28.827901 containerd[1796]: time="2025-01-30T13:55:28.827886254Z" level=info msg="CreateContainer within sandbox \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71\"" Jan 30 13:55:28.828124 containerd[1796]: time="2025-01-30T13:55:28.828111590Z" level=info msg="StartContainer for \"e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71\"" Jan 30 13:55:28.855735 systemd[1]: Started cri-containerd-e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71.scope - libcontainer container e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71. Jan 30 13:55:28.869218 containerd[1796]: time="2025-01-30T13:55:28.869196040Z" level=info msg="StartContainer for \"e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71\" returns successfully" Jan 30 13:55:29.174439 kubelet[3109]: E0130 13:55:29.174373 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:29.406306 systemd[1]: cri-containerd-e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71.scope: Deactivated successfully. 
Jan 30 13:55:29.416176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71-rootfs.mount: Deactivated successfully. Jan 30 13:55:29.494828 kubelet[3109]: I0130 13:55:29.494712 3109 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:55:29.555398 systemd[1]: Created slice kubepods-burstable-podf6858660_a650_44b5_8920_2ec81bb1b138.slice - libcontainer container kubepods-burstable-podf6858660_a650_44b5_8920_2ec81bb1b138.slice. Jan 30 13:55:29.569496 systemd[1]: Created slice kubepods-burstable-pod07922a7a_83b4_4d16_85d7_30bdc2b6b793.slice - libcontainer container kubepods-burstable-pod07922a7a_83b4_4d16_85d7_30bdc2b6b793.slice. Jan 30 13:55:29.579829 systemd[1]: Created slice kubepods-besteffort-pod8e5acc49_b227_4ac4_a04e_929de29daecb.slice - libcontainer container kubepods-besteffort-pod8e5acc49_b227_4ac4_a04e_929de29daecb.slice. Jan 30 13:55:29.586999 systemd[1]: Created slice kubepods-besteffort-pod111d62eb_6a22_4903_83b5_b0f05dac736f.slice - libcontainer container kubepods-besteffort-pod111d62eb_6a22_4903_83b5_b0f05dac736f.slice. Jan 30 13:55:29.590244 systemd[1]: Created slice kubepods-besteffort-pod378b5e35_c026_4506_a582_fb431d551682.slice - libcontainer container kubepods-besteffort-pod378b5e35_c026_4506_a582_fb431d551682.slice. 
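The `NetworkPluginNotReady` / `cni plugin not initialized` errors above have a single readiness gate: the Calico CNI plugin stats `/var/lib/calico/nodename`, which the `calico-node` container writes once it is up. A minimal sketch of that check, assuming shell access to the node (the path is quoted directly from the log's error messages):

```shell
# Check the file the Calico CNI plugin stats before any sandbox add/delete.
# Until calico-node writes it, every RunPodSandbox fails with
# "stat /var/lib/calico/nodename: no such file or directory".
NODENAME_FILE=/var/lib/calico/nodename

if [ -f "$NODENAME_FILE" ]; then
  echo "calico-node ready, nodename: $(cat "$NODENAME_FILE")"
else
  echo "missing $NODENAME_FILE - calico-node has not mounted /var/lib/calico/ yet"
fi
```

This matches the log's own remediation hint ("check that the calico/node container is running and has mounted /var/lib/calico/"): the sandbox errors are expected to clear on their own once `calico-node` finishes starting.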
Jan 30 13:55:29.616074 kubelet[3109]: I0130 13:55:29.615966 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6km\" (UniqueName: \"kubernetes.io/projected/111d62eb-6a22-4903-83b5-b0f05dac736f-kube-api-access-rv6km\") pod \"calico-apiserver-57746584d-z55mr\" (UID: \"111d62eb-6a22-4903-83b5-b0f05dac736f\") " pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:29.616392 kubelet[3109]: I0130 13:55:29.616099 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07922a7a-83b4-4d16-85d7-30bdc2b6b793-config-volume\") pod \"coredns-668d6bf9bc-2f94w\" (UID: \"07922a7a-83b4-4d16-85d7-30bdc2b6b793\") " pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:29.616392 kubelet[3109]: I0130 13:55:29.616158 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6858660-a650-44b5-8920-2ec81bb1b138-config-volume\") pod \"coredns-668d6bf9bc-k9ngp\" (UID: \"f6858660-a650-44b5-8920-2ec81bb1b138\") " pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:29.616392 kubelet[3109]: I0130 13:55:29.616207 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlq47\" (UniqueName: \"kubernetes.io/projected/378b5e35-c026-4506-a582-fb431d551682-kube-api-access-wlq47\") pod \"calico-apiserver-57746584d-r5fdn\" (UID: \"378b5e35-c026-4506-a582-fb431d551682\") " pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:29.616392 kubelet[3109]: I0130 13:55:29.616252 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsskw\" (UniqueName: \"kubernetes.io/projected/07922a7a-83b4-4d16-85d7-30bdc2b6b793-kube-api-access-lsskw\") pod \"coredns-668d6bf9bc-2f94w\" (UID: 
\"07922a7a-83b4-4d16-85d7-30bdc2b6b793\") " pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:29.616392 kubelet[3109]: I0130 13:55:29.616295 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg556\" (UniqueName: \"kubernetes.io/projected/f6858660-a650-44b5-8920-2ec81bb1b138-kube-api-access-pg556\") pod \"coredns-668d6bf9bc-k9ngp\" (UID: \"f6858660-a650-44b5-8920-2ec81bb1b138\") " pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:29.616868 kubelet[3109]: I0130 13:55:29.616344 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfvrn\" (UniqueName: \"kubernetes.io/projected/8e5acc49-b227-4ac4-a04e-929de29daecb-kube-api-access-nfvrn\") pod \"calico-kube-controllers-57994df9cf-cnldt\" (UID: \"8e5acc49-b227-4ac4-a04e-929de29daecb\") " pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:29.616868 kubelet[3109]: I0130 13:55:29.616390 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/111d62eb-6a22-4903-83b5-b0f05dac736f-calico-apiserver-certs\") pod \"calico-apiserver-57746584d-z55mr\" (UID: \"111d62eb-6a22-4903-83b5-b0f05dac736f\") " pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:29.616868 kubelet[3109]: I0130 13:55:29.616500 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e5acc49-b227-4ac4-a04e-929de29daecb-tigera-ca-bundle\") pod \"calico-kube-controllers-57994df9cf-cnldt\" (UID: \"8e5acc49-b227-4ac4-a04e-929de29daecb\") " pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:29.616868 kubelet[3109]: I0130 13:55:29.616610 3109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/378b5e35-c026-4506-a582-fb431d551682-calico-apiserver-certs\") pod \"calico-apiserver-57746584d-r5fdn\" (UID: \"378b5e35-c026-4506-a582-fb431d551682\") " pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:29.862942 containerd[1796]: time="2025-01-30T13:55:29.862800158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:29.885685 containerd[1796]: time="2025-01-30T13:55:29.885649557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:29.885830 containerd[1796]: time="2025-01-30T13:55:29.885650414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:0,}" Jan 30 13:55:29.889814 containerd[1796]: time="2025-01-30T13:55:29.889748291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:55:29.892938 containerd[1796]: time="2025-01-30T13:55:29.892857300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:55:30.078978 containerd[1796]: time="2025-01-30T13:55:30.078917196Z" level=info msg="shim disconnected" id=e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71 namespace=k8s.io Jan 30 13:55:30.078978 containerd[1796]: time="2025-01-30T13:55:30.078974956Z" level=warning msg="cleaning up after shim disconnected" id=e200d3350824f3008560ec9e050ce85f30958b98840cad0fc98adcdd782bde71 namespace=k8s.io Jan 30 13:55:30.078978 containerd[1796]: 
time="2025-01-30T13:55:30.078998521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:55:30.120052 containerd[1796]: time="2025-01-30T13:55:30.119972461Z" level=error msg="Failed to destroy network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120052 containerd[1796]: time="2025-01-30T13:55:30.120027285Z" level=error msg="Failed to destroy network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120243 containerd[1796]: time="2025-01-30T13:55:30.120222374Z" level=error msg="Failed to destroy network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120271 containerd[1796]: time="2025-01-30T13:55:30.120239563Z" level=error msg="encountered an error cleaning up failed sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120300 containerd[1796]: time="2025-01-30T13:55:30.120284351Z" level=error msg="encountered an error cleaning up failed sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120339 containerd[1796]: time="2025-01-30T13:55:30.120324332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120372 containerd[1796]: time="2025-01-30T13:55:30.120290242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120453 containerd[1796]: time="2025-01-30T13:55:30.120436599Z" level=error msg="encountered an error cleaning up failed sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120502 containerd[1796]: time="2025-01-30T13:55:30.120461533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network 
for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120555 kubelet[3109]: E0130 13:55:30.120525 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120586 kubelet[3109]: E0130 13:55:30.120548 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120586 kubelet[3109]: E0130 13:55:30.120525 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.120638 kubelet[3109]: E0130 13:55:30.120593 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:30.120638 kubelet[3109]: E0130 13:55:30.120612 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:30.120692 kubelet[3109]: E0130 13:55:30.120593 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:30.120692 kubelet[3109]: E0130 13:55:30.120644 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podUID="378b5e35-c026-4506-a582-fb431d551682" Jan 30 13:55:30.120692 kubelet[3109]: E0130 13:55:30.120651 3109 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:30.120762 kubelet[3109]: E0130 13:55:30.120655 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:30.120762 kubelet[3109]: E0130 13:55:30.120667 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:30.120762 kubelet[3109]: E0130 13:55:30.120686 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podUID="111d62eb-6a22-4903-83b5-b0f05dac736f" Jan 30 13:55:30.120830 kubelet[3109]: E0130 13:55:30.120687 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2f94w" podUID="07922a7a-83b4-4d16-85d7-30bdc2b6b793" Jan 30 13:55:30.121210 containerd[1796]: time="2025-01-30T13:55:30.121191637Z" level=error msg="Failed to destroy network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121303 containerd[1796]: time="2025-01-30T13:55:30.121290979Z" level=error msg="Failed to destroy network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121398 containerd[1796]: time="2025-01-30T13:55:30.121385501Z" level=error msg="encountered an error cleaning up failed 
sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121421 containerd[1796]: time="2025-01-30T13:55:30.121410720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121465 containerd[1796]: time="2025-01-30T13:55:30.121431550Z" level=error msg="encountered an error cleaning up failed sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121465 containerd[1796]: time="2025-01-30T13:55:30.121456350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121516 kubelet[3109]: E0130 13:55:30.121486 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121516 kubelet[3109]: E0130 13:55:30.121510 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:30.121576 kubelet[3109]: E0130 13:55:30.121514 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.121576 kubelet[3109]: E0130 13:55:30.121521 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:30.121576 kubelet[3109]: E0130 13:55:30.121536 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:30.121576 kubelet[3109]: E0130 13:55:30.121546 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:30.121643 kubelet[3109]: E0130 13:55:30.121540 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podUID="8e5acc49-b227-4ac4-a04e-929de29daecb" Jan 30 13:55:30.121643 kubelet[3109]: E0130 13:55:30.121576 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k9ngp" podUID="f6858660-a650-44b5-8920-2ec81bb1b138" Jan 30 13:55:30.121681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6-shm.mount: Deactivated successfully. Jan 30 13:55:30.121738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d-shm.mount: Deactivated successfully. Jan 30 13:55:30.121776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259-shm.mount: Deactivated successfully. 
Jan 30 13:55:30.234409 kubelet[3109]: I0130 13:55:30.234387 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6" Jan 30 13:55:30.234855 containerd[1796]: time="2025-01-30T13:55:30.234830913Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:55:30.235024 containerd[1796]: time="2025-01-30T13:55:30.235007811Z" level=info msg="Ensure that sandbox 8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6 in task-service has been cleanup successfully" Jan 30 13:55:30.235092 kubelet[3109]: I0130 13:55:30.235029 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259" Jan 30 13:55:30.235162 containerd[1796]: time="2025-01-30T13:55:30.235144373Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:55:30.235222 containerd[1796]: time="2025-01-30T13:55:30.235161645Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:55:30.235367 containerd[1796]: time="2025-01-30T13:55:30.235351351Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:55:30.235518 containerd[1796]: time="2025-01-30T13:55:30.235497318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:55:30.235595 containerd[1796]: time="2025-01-30T13:55:30.235577636Z" level=info msg="Ensure that sandbox 3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259 in task-service has been cleanup successfully" Jan 30 13:55:30.235754 containerd[1796]: time="2025-01-30T13:55:30.235736043Z" 
level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:55:30.235802 containerd[1796]: time="2025-01-30T13:55:30.235754316Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:55:30.236038 containerd[1796]: time="2025-01-30T13:55:30.236020131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:1,}" Jan 30 13:55:30.236614 kubelet[3109]: I0130 13:55:30.236601 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d" Jan 30 13:55:30.236649 containerd[1796]: time="2025-01-30T13:55:30.236624144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:55:30.236848 containerd[1796]: time="2025-01-30T13:55:30.236838242Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:55:30.236958 containerd[1796]: time="2025-01-30T13:55:30.236946243Z" level=info msg="Ensure that sandbox e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d in task-service has been cleanup successfully" Jan 30 13:55:30.237042 kubelet[3109]: I0130 13:55:30.237033 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5" Jan 30 13:55:30.237079 containerd[1796]: time="2025-01-30T13:55:30.237068300Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:55:30.237106 containerd[1796]: time="2025-01-30T13:55:30.237080989Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:55:30.237231 
containerd[1796]: time="2025-01-30T13:55:30.237222404Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:55:30.237297 containerd[1796]: time="2025-01-30T13:55:30.237280219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:55:30.237335 containerd[1796]: time="2025-01-30T13:55:30.237319471Z" level=info msg="Ensure that sandbox a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5 in task-service has been cleanup successfully" Jan 30 13:55:30.237444 containerd[1796]: time="2025-01-30T13:55:30.237405008Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:55:30.237444 containerd[1796]: time="2025-01-30T13:55:30.237413001Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:55:30.237541 kubelet[3109]: I0130 13:55:30.237516 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5" Jan 30 13:55:30.237609 containerd[1796]: time="2025-01-30T13:55:30.237597173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:1,}" Jan 30 13:55:30.237712 containerd[1796]: time="2025-01-30T13:55:30.237698196Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:55:30.237805 containerd[1796]: time="2025-01-30T13:55:30.237795551Z" level=info msg="Ensure that sandbox 095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5 in task-service has been cleanup successfully" Jan 30 13:55:30.237884 containerd[1796]: 
time="2025-01-30T13:55:30.237873812Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:55:30.237884 containerd[1796]: time="2025-01-30T13:55:30.237883933Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:55:30.238083 containerd[1796]: time="2025-01-30T13:55:30.238071313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:1,}" Jan 30 13:55:30.283664 containerd[1796]: time="2025-01-30T13:55:30.283636783Z" level=error msg="Failed to destroy network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.283887 containerd[1796]: time="2025-01-30T13:55:30.283874958Z" level=error msg="encountered an error cleaning up failed sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.283921 containerd[1796]: time="2025-01-30T13:55:30.283911913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:55:30.284021 containerd[1796]: time="2025-01-30T13:55:30.283977040Z" level=error msg="Failed to destroy network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284069 containerd[1796]: time="2025-01-30T13:55:30.284032928Z" level=error msg="Failed to destroy network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284098 kubelet[3109]: E0130 13:55:30.284047 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284098 kubelet[3109]: E0130 13:55:30.284084 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:30.284146 kubelet[3109]: E0130 13:55:30.284107 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:30.284163 kubelet[3109]: E0130 13:55:30.284134 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podUID="111d62eb-6a22-4903-83b5-b0f05dac736f" Jan 30 13:55:30.284202 containerd[1796]: time="2025-01-30T13:55:30.284173754Z" level=error msg="encountered an error cleaning up failed sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284228 containerd[1796]: time="2025-01-30T13:55:30.284214232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284263 containerd[1796]: time="2025-01-30T13:55:30.284246149Z" level=error msg="encountered an error cleaning up failed sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284294 containerd[1796]: time="2025-01-30T13:55:30.284279757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284344 kubelet[3109]: E0130 13:55:30.284288 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284344 kubelet[3109]: E0130 13:55:30.284314 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:30.284344 kubelet[3109]: E0130 13:55:30.284328 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:30.284403 kubelet[3109]: E0130 13:55:30.284354 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2f94w" podUID="07922a7a-83b4-4d16-85d7-30bdc2b6b793" Jan 30 13:55:30.284403 kubelet[3109]: E0130 13:55:30.284358 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284403 kubelet[3109]: E0130 13:55:30.284386 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:30.284483 kubelet[3109]: E0130 13:55:30.284395 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:30.284483 kubelet[3109]: E0130 13:55:30.284410 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k9ngp" podUID="f6858660-a650-44b5-8920-2ec81bb1b138" Jan 30 13:55:30.284645 containerd[1796]: time="2025-01-30T13:55:30.284625649Z" level=error msg="Failed to destroy network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284779 
containerd[1796]: time="2025-01-30T13:55:30.284767706Z" level=error msg="encountered an error cleaning up failed sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284799 containerd[1796]: time="2025-01-30T13:55:30.284790769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284854 kubelet[3109]: E0130 13:55:30.284844 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.284875 kubelet[3109]: E0130 13:55:30.284861 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:30.284875 kubelet[3109]: E0130 13:55:30.284870 3109 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:30.284918 kubelet[3109]: E0130 13:55:30.284886 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podUID="378b5e35-c026-4506-a582-fb431d551682" Jan 30 13:55:30.287735 containerd[1796]: time="2025-01-30T13:55:30.287713567Z" level=error msg="Failed to destroy network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.287880 containerd[1796]: time="2025-01-30T13:55:30.287866995Z" level=error msg="encountered an error cleaning up failed sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.287914 containerd[1796]: time="2025-01-30T13:55:30.287897653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.288005 kubelet[3109]: E0130 13:55:30.287991 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:30.288038 kubelet[3109]: E0130 13:55:30.288014 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:30.288038 kubelet[3109]: E0130 13:55:30.288026 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:30.288094 kubelet[3109]: E0130 13:55:30.288061 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podUID="8e5acc49-b227-4ac4-a04e-929de29daecb" Jan 30 13:55:30.829861 systemd[1]: run-netns-cni\x2d370b89f0\x2d6722\x2df36c\x2d146e\x2d9cef8dbe4f49.mount: Deactivated successfully. Jan 30 13:55:30.829933 systemd[1]: run-netns-cni\x2db1db20dc\x2d4ea6\x2de759\x2d18fb\x2d9c5254f9aff8.mount: Deactivated successfully. Jan 30 13:55:30.829984 systemd[1]: run-netns-cni\x2d855aab64\x2d4e3d\x2d14d0\x2d493b\x2dcc5a1ad4e7f3.mount: Deactivated successfully. Jan 30 13:55:30.830032 systemd[1]: run-netns-cni\x2df521da38\x2d535d\x2db1e6\x2d73e1\x2df3751d0d9f74.mount: Deactivated successfully. Jan 30 13:55:30.830080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5-shm.mount: Deactivated successfully. Jan 30 13:55:30.830135 systemd[1]: run-netns-cni\x2dc41634b7\x2d8385\x2d9498\x2d8582\x2d21ca102647ee.mount: Deactivated successfully. 
Jan 30 13:55:30.830188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5-shm.mount: Deactivated successfully. Jan 30 13:55:31.178343 systemd[1]: Created slice kubepods-besteffort-podb153ff53_b790_4ffe_82ac_a800a8f52eef.slice - libcontainer container kubepods-besteffort-podb153ff53_b790_4ffe_82ac_a800a8f52eef.slice. Jan 30 13:55:31.179489 containerd[1796]: time="2025-01-30T13:55:31.179469095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:0,}" Jan 30 13:55:31.210204 containerd[1796]: time="2025-01-30T13:55:31.210145854Z" level=error msg="Failed to destroy network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.210366 containerd[1796]: time="2025-01-30T13:55:31.210352211Z" level=error msg="encountered an error cleaning up failed sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.210403 containerd[1796]: time="2025-01-30T13:55:31.210391905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:55:31.210587 kubelet[3109]: E0130 13:55:31.210534 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.210587 kubelet[3109]: E0130 13:55:31.210575 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:31.210655 kubelet[3109]: E0130 13:55:31.210591 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:31.210655 kubelet[3109]: E0130 13:55:31.210621 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:31.211847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4-shm.mount: Deactivated successfully. Jan 30 13:55:31.242991 kubelet[3109]: I0130 13:55:31.242943 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22" Jan 30 13:55:31.244047 containerd[1796]: time="2025-01-30T13:55:31.243967159Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:55:31.244764 containerd[1796]: time="2025-01-30T13:55:31.244708969Z" level=info msg="Ensure that sandbox 803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22 in task-service has been cleanup successfully" Jan 30 13:55:31.245373 containerd[1796]: time="2025-01-30T13:55:31.245282712Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:55:31.245373 containerd[1796]: time="2025-01-30T13:55:31.245348334Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 13:55:31.246124 kubelet[3109]: I0130 13:55:31.246033 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65" Jan 30 13:55:31.246357 containerd[1796]: time="2025-01-30T13:55:31.246117020Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:55:31.246542 containerd[1796]: time="2025-01-30T13:55:31.246404940Z" level=info msg="TearDown network for sandbox 
\"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:55:31.246649 containerd[1796]: time="2025-01-30T13:55:31.246556224Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:55:31.247213 containerd[1796]: time="2025-01-30T13:55:31.247109054Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:55:31.247577 containerd[1796]: time="2025-01-30T13:55:31.247459177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:55:31.247768 containerd[1796]: time="2025-01-30T13:55:31.247686894Z" level=info msg="Ensure that sandbox 79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65 in task-service has been cleanup successfully" Jan 30 13:55:31.248227 containerd[1796]: time="2025-01-30T13:55:31.248132287Z" level=info msg="TearDown network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:55:31.248227 containerd[1796]: time="2025-01-30T13:55:31.248191418Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:55:31.248808 containerd[1796]: time="2025-01-30T13:55:31.248776984Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:55:31.248904 containerd[1796]: time="2025-01-30T13:55:31.248889329Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:55:31.248957 containerd[1796]: time="2025-01-30T13:55:31.248903601Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:55:31.249037 kubelet[3109]: 
I0130 13:55:31.249028 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3" Jan 30 13:55:31.249148 containerd[1796]: time="2025-01-30T13:55:31.249135707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:2,}" Jan 30 13:55:31.249400 containerd[1796]: time="2025-01-30T13:55:31.249385639Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:55:31.249570 containerd[1796]: time="2025-01-30T13:55:31.249555107Z" level=info msg="Ensure that sandbox 987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3 in task-service has been cleanup successfully" Jan 30 13:55:31.249973 systemd[1]: run-netns-cni\x2de60440e1\x2dfd30\x2d4362\x2d31dd\x2dc29f6f526f5d.mount: Deactivated successfully. Jan 30 13:55:31.250243 containerd[1796]: time="2025-01-30T13:55:31.250231873Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:55:31.250243 containerd[1796]: time="2025-01-30T13:55:31.250242205Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:55:31.250575 kubelet[3109]: I0130 13:55:31.250560 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626" Jan 30 13:55:31.250620 containerd[1796]: time="2025-01-30T13:55:31.250570415Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:55:31.250660 containerd[1796]: time="2025-01-30T13:55:31.250630087Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" 
Jan 30 13:55:31.250688 containerd[1796]: time="2025-01-30T13:55:31.250660490Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:55:31.250874 containerd[1796]: time="2025-01-30T13:55:31.250860312Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:55:31.250908 containerd[1796]: time="2025-01-30T13:55:31.250897790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:2,}" Jan 30 13:55:31.251007 containerd[1796]: time="2025-01-30T13:55:31.250996043Z" level=info msg="Ensure that sandbox 36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626 in task-service has been cleanup successfully" Jan 30 13:55:31.251069 kubelet[3109]: I0130 13:55:31.251060 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4" Jan 30 13:55:31.251101 containerd[1796]: time="2025-01-30T13:55:31.251089572Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:55:31.251126 containerd[1796]: time="2025-01-30T13:55:31.251100321Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 13:55:31.251240 containerd[1796]: time="2025-01-30T13:55:31.251229832Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:55:31.251287 containerd[1796]: time="2025-01-30T13:55:31.251277634Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:55:31.251308 containerd[1796]: time="2025-01-30T13:55:31.251287369Z" level=info msg="StopPodSandbox for 
\"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:55:31.251323 containerd[1796]: time="2025-01-30T13:55:31.251307876Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:55:31.251415 containerd[1796]: time="2025-01-30T13:55:31.251406533Z" level=info msg="Ensure that sandbox 58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4 in task-service has been cleanup successfully" Jan 30 13:55:31.251490 containerd[1796]: time="2025-01-30T13:55:31.251481333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:2,}" Jan 30 13:55:31.251557 containerd[1796]: time="2025-01-30T13:55:31.251483208Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 13:55:31.251581 containerd[1796]: time="2025-01-30T13:55:31.251556535Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:55:31.251646 kubelet[3109]: I0130 13:55:31.251635 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a" Jan 30 13:55:31.251769 containerd[1796]: time="2025-01-30T13:55:31.251756136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:1,}" Jan 30 13:55:31.251874 containerd[1796]: time="2025-01-30T13:55:31.251864932Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:55:31.251958 containerd[1796]: time="2025-01-30T13:55:31.251948862Z" level=info msg="Ensure that sandbox 1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a in task-service has been 
cleanup successfully" Jan 30 13:55:31.252023 containerd[1796]: time="2025-01-30T13:55:31.252016155Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:55:31.252051 containerd[1796]: time="2025-01-30T13:55:31.252023518Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:55:31.252131 containerd[1796]: time="2025-01-30T13:55:31.252122356Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:55:31.252176 containerd[1796]: time="2025-01-30T13:55:31.252169192Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:55:31.252198 containerd[1796]: time="2025-01-30T13:55:31.252176624Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:55:31.252349 containerd[1796]: time="2025-01-30T13:55:31.252339198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:55:31.286936 containerd[1796]: time="2025-01-30T13:55:31.286883048Z" level=error msg="Failed to destroy network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.287270 containerd[1796]: time="2025-01-30T13:55:31.287249385Z" level=error msg="encountered an error cleaning up failed sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.287402 containerd[1796]: time="2025-01-30T13:55:31.287298102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.287508 kubelet[3109]: E0130 13:55:31.287484 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.287542 kubelet[3109]: E0130 13:55:31.287525 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:31.287560 kubelet[3109]: E0130 13:55:31.287540 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:31.287584 kubelet[3109]: E0130 13:55:31.287569 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podUID="111d62eb-6a22-4903-83b5-b0f05dac736f" Jan 30 13:55:31.288265 containerd[1796]: time="2025-01-30T13:55:31.288243753Z" level=error msg="Failed to destroy network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.288486 containerd[1796]: time="2025-01-30T13:55:31.288466486Z" level=error msg="encountered an error cleaning up failed sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.288526 containerd[1796]: time="2025-01-30T13:55:31.288510106Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.288776 kubelet[3109]: E0130 13:55:31.288641 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.288776 kubelet[3109]: E0130 13:55:31.288686 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:31.288776 kubelet[3109]: E0130 13:55:31.288707 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:31.288887 kubelet[3109]: E0130 13:55:31.288742 3109 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podUID="8e5acc49-b227-4ac4-a04e-929de29daecb" Jan 30 13:55:31.290481 containerd[1796]: time="2025-01-30T13:55:31.290458535Z" level=error msg="Failed to destroy network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290551 containerd[1796]: time="2025-01-30T13:55:31.290494901Z" level=error msg="Failed to destroy network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290551 containerd[1796]: time="2025-01-30T13:55:31.290521141Z" level=error msg="Failed to destroy network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290677 containerd[1796]: 
time="2025-01-30T13:55:31.290664838Z" level=error msg="encountered an error cleaning up failed sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290715 containerd[1796]: time="2025-01-30T13:55:31.290680733Z" level=error msg="encountered an error cleaning up failed sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290715 containerd[1796]: time="2025-01-30T13:55:31.290695512Z" level=error msg="encountered an error cleaning up failed sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290759 containerd[1796]: time="2025-01-30T13:55:31.290715968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290803 containerd[1796]: time="2025-01-30T13:55:31.290725794Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290823 containerd[1796]: time="2025-01-30T13:55:31.290698264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290852 kubelet[3109]: E0130 13:55:31.290836 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290883 kubelet[3109]: E0130 13:55:31.290836 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290883 kubelet[3109]: E0130 13:55:31.290867 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:31.290883 kubelet[3109]: E0130 13:55:31.290874 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:31.290939 kubelet[3109]: E0130 13:55:31.290887 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:31.290939 kubelet[3109]: E0130 13:55:31.290885 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.290939 kubelet[3109]: E0130 13:55:31.290910 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:31.291016 kubelet[3109]: E0130 13:55:31.290912 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:31.291016 kubelet[3109]: E0130 13:55:31.290923 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:31.291016 kubelet[3109]: E0130 13:55:31.290888 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:31.291119 kubelet[3109]: E0130 13:55:31.290940 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podUID="378b5e35-c026-4506-a582-fb431d551682" Jan 30 13:55:31.291119 kubelet[3109]: E0130 13:55:31.290943 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k9ngp" podUID="f6858660-a650-44b5-8920-2ec81bb1b138" Jan 30 13:55:31.291242 containerd[1796]: time="2025-01-30T13:55:31.291226947Z" level=error msg="Failed to destroy network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.291376 containerd[1796]: time="2025-01-30T13:55:31.291358334Z" level=error msg="encountered an error cleaning up failed sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.291409 containerd[1796]: time="2025-01-30T13:55:31.291385978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.291468 kubelet[3109]: E0130 13:55:31.291457 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:31.291505 kubelet[3109]: E0130 13:55:31.291472 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:31.291505 kubelet[3109]: E0130 13:55:31.291480 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:31.291505 kubelet[3109]: E0130 13:55:31.291496 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2f94w" podUID="07922a7a-83b4-4d16-85d7-30bdc2b6b793" Jan 30 13:55:31.840673 systemd[1]: run-netns-cni\x2db72114b7\x2d2ca7\x2d23cf\x2d86ae\x2dd8edb7f5a3e6.mount: Deactivated successfully. Jan 30 13:55:31.840925 systemd[1]: run-netns-cni\x2debd31fca\x2dc4fc\x2d3b90\x2d83d4\x2dced1b19de3b4.mount: Deactivated successfully. Jan 30 13:55:31.841116 systemd[1]: run-netns-cni\x2d16fc252e\x2deab0\x2d4acd\x2d5c31\x2d1f14c23c0838.mount: Deactivated successfully. Jan 30 13:55:31.841288 systemd[1]: run-netns-cni\x2d90d3e1ca\x2d4063\x2d159b\x2d94cb\x2d3c0270e44ac5.mount: Deactivated successfully. 
Jan 30 13:55:31.841494 systemd[1]: run-netns-cni\x2d38fb3a56\x2d2431\x2dd7d7\x2d9367\x2d888e999d40f5.mount: Deactivated successfully. Jan 30 13:55:32.253933 kubelet[3109]: I0130 13:55:32.253910 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9" Jan 30 13:55:32.254290 containerd[1796]: time="2025-01-30T13:55:32.254267735Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:55:32.254501 containerd[1796]: time="2025-01-30T13:55:32.254442072Z" level=info msg="Ensure that sandbox 03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9 in task-service has been cleanup successfully" Jan 30 13:55:32.254627 containerd[1796]: time="2025-01-30T13:55:32.254577826Z" level=info msg="TearDown network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" successfully" Jan 30 13:55:32.254627 containerd[1796]: time="2025-01-30T13:55:32.254596284Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" returns successfully" Jan 30 13:55:32.254797 containerd[1796]: time="2025-01-30T13:55:32.254778423Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:55:32.254913 kubelet[3109]: I0130 13:55:32.254874 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f" Jan 30 13:55:32.254961 containerd[1796]: time="2025-01-30T13:55:32.254863418Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:55:32.254961 containerd[1796]: time="2025-01-30T13:55:32.254910705Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 
13:55:32.255067 containerd[1796]: time="2025-01-30T13:55:32.255051717Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:55:32.255146 containerd[1796]: time="2025-01-30T13:55:32.255116146Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:55:32.255180 containerd[1796]: time="2025-01-30T13:55:32.255146618Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:55:32.255263 containerd[1796]: time="2025-01-30T13:55:32.255248025Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:55:32.255401 containerd[1796]: time="2025-01-30T13:55:32.255387937Z" level=info msg="Ensure that sandbox 5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f in task-service has been cleanup successfully" Jan 30 13:55:32.255451 containerd[1796]: time="2025-01-30T13:55:32.255438075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:55:32.255541 containerd[1796]: time="2025-01-30T13:55:32.255527489Z" level=info msg="TearDown network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" successfully" Jan 30 13:55:32.255568 containerd[1796]: time="2025-01-30T13:55:32.255541616Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" returns successfully" Jan 30 13:55:32.255759 containerd[1796]: time="2025-01-30T13:55:32.255740435Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:55:32.255821 containerd[1796]: time="2025-01-30T13:55:32.255806374Z" level=info msg="TearDown network for sandbox 
\"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:55:32.255879 containerd[1796]: time="2025-01-30T13:55:32.255822881Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:55:32.255904 kubelet[3109]: I0130 13:55:32.255890 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f" Jan 30 13:55:32.256082 containerd[1796]: time="2025-01-30T13:55:32.256067037Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:55:32.256115 containerd[1796]: time="2025-01-30T13:55:32.256100912Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:55:32.256165 containerd[1796]: time="2025-01-30T13:55:32.256143688Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:55:32.256199 containerd[1796]: time="2025-01-30T13:55:32.256164356Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:55:32.256230 containerd[1796]: time="2025-01-30T13:55:32.256199533Z" level=info msg="Ensure that sandbox 6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f in task-service has been cleanup successfully" Jan 30 13:55:32.256287 containerd[1796]: time="2025-01-30T13:55:32.256278054Z" level=info msg="TearDown network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" successfully" Jan 30 13:55:32.256287 containerd[1796]: time="2025-01-30T13:55:32.256285449Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" returns successfully" Jan 30 13:55:32.256347 containerd[1796]: 
time="2025-01-30T13:55:32.256331133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:3,}" Jan 30 13:55:32.256479 containerd[1796]: time="2025-01-30T13:55:32.256467362Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:55:32.256523 containerd[1796]: time="2025-01-30T13:55:32.256507159Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:55:32.256523 containerd[1796]: time="2025-01-30T13:55:32.256513632Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:55:32.256680 containerd[1796]: time="2025-01-30T13:55:32.256669862Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:55:32.256726 containerd[1796]: time="2025-01-30T13:55:32.256716489Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:55:32.256746 containerd[1796]: time="2025-01-30T13:55:32.256726735Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:55:32.256765 kubelet[3109]: I0130 13:55:32.256736 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2" Jan 30 13:55:32.256779 systemd[1]: run-netns-cni\x2de1d9acca\x2d02d7\x2d5f39\x2d6c89\x2d7054555c30ec.mount: Deactivated successfully. 
Jan 30 13:55:32.256910 containerd[1796]: time="2025-01-30T13:55:32.256896950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:3,}" Jan 30 13:55:32.256931 containerd[1796]: time="2025-01-30T13:55:32.256920378Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:55:32.257020 containerd[1796]: time="2025-01-30T13:55:32.257009883Z" level=info msg="Ensure that sandbox 7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2 in task-service has been cleanup successfully" Jan 30 13:55:32.257112 containerd[1796]: time="2025-01-30T13:55:32.257100690Z" level=info msg="TearDown network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" successfully" Jan 30 13:55:32.257144 containerd[1796]: time="2025-01-30T13:55:32.257111782Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" returns successfully" Jan 30 13:55:32.257261 containerd[1796]: time="2025-01-30T13:55:32.257249678Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:55:32.257324 containerd[1796]: time="2025-01-30T13:55:32.257299795Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:55:32.257346 containerd[1796]: time="2025-01-30T13:55:32.257323485Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:55:32.257446 containerd[1796]: time="2025-01-30T13:55:32.257433485Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:55:32.257470 kubelet[3109]: I0130 13:55:32.257443 3109 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3" Jan 30 13:55:32.257502 containerd[1796]: time="2025-01-30T13:55:32.257491263Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:55:32.257532 containerd[1796]: time="2025-01-30T13:55:32.257502561Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:55:32.257654 containerd[1796]: time="2025-01-30T13:55:32.257641655Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:55:32.257721 containerd[1796]: time="2025-01-30T13:55:32.257711613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:55:32.257772 containerd[1796]: time="2025-01-30T13:55:32.257760413Z" level=info msg="Ensure that sandbox 9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3 in task-service has been cleanup successfully" Jan 30 13:55:32.257867 containerd[1796]: time="2025-01-30T13:55:32.257855178Z" level=info msg="TearDown network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" successfully" Jan 30 13:55:32.257898 containerd[1796]: time="2025-01-30T13:55:32.257866713Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" returns successfully" Jan 30 13:55:32.257981 containerd[1796]: time="2025-01-30T13:55:32.257970100Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:55:32.258010 kubelet[3109]: I0130 13:55:32.258003 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f" Jan 30 13:55:32.258031 
containerd[1796]: time="2025-01-30T13:55:32.258019737Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:55:32.258052 containerd[1796]: time="2025-01-30T13:55:32.258029030Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 13:55:32.258164 containerd[1796]: time="2025-01-30T13:55:32.258153902Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:55:32.258210 containerd[1796]: time="2025-01-30T13:55:32.258199408Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:55:32.258307 containerd[1796]: time="2025-01-30T13:55:32.258201278Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:55:32.258337 containerd[1796]: time="2025-01-30T13:55:32.258301818Z" level=info msg="Ensure that sandbox efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f in task-service has been cleanup successfully" Jan 30 13:55:32.258386 containerd[1796]: time="2025-01-30T13:55:32.258307005Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:55:32.258418 containerd[1796]: time="2025-01-30T13:55:32.258405397Z" level=info msg="TearDown network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" successfully" Jan 30 13:55:32.258448 containerd[1796]: time="2025-01-30T13:55:32.258415930Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" returns successfully" Jan 30 13:55:32.258582 containerd[1796]: time="2025-01-30T13:55:32.258568434Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:3,}" Jan 30 13:55:32.258622 containerd[1796]: time="2025-01-30T13:55:32.258601966Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:55:32.258655 containerd[1796]: time="2025-01-30T13:55:32.258646609Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 13:55:32.258684 containerd[1796]: time="2025-01-30T13:55:32.258655509Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:55:32.258761 systemd[1]: run-netns-cni\x2df49d0de0\x2d2c42\x2de957\x2d7c5d\x2dc858b4559598.mount: Deactivated successfully. Jan 30 13:55:32.258814 systemd[1]: run-netns-cni\x2d3cb815d4\x2def39\x2d6676\x2d6a8d\x2d66babde428fc.mount: Deactivated successfully. Jan 30 13:55:32.258848 containerd[1796]: time="2025-01-30T13:55:32.258837968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:2,}" Jan 30 13:55:32.258851 systemd[1]: run-netns-cni\x2d05b6dcc1\x2d7955\x2d3372\x2d3951\x2d5ebd1a383240.mount: Deactivated successfully. Jan 30 13:55:32.260773 systemd[1]: run-netns-cni\x2d9f2ce95e\x2d61ac\x2ddf4d\x2d95ca\x2d74285c0b4766.mount: Deactivated successfully. Jan 30 13:55:32.260834 systemd[1]: run-netns-cni\x2d6b785e11\x2d0f16\x2db768\x2df1f4\x2dda18bb7d07a1.mount: Deactivated successfully. 
Jan 30 13:55:32.295097 containerd[1796]: time="2025-01-30T13:55:32.295069007Z" level=error msg="Failed to destroy network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.295280 containerd[1796]: time="2025-01-30T13:55:32.295265767Z" level=error msg="encountered an error cleaning up failed sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.295342 containerd[1796]: time="2025-01-30T13:55:32.295326645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.295617 kubelet[3109]: E0130 13:55:32.295557 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.295691 kubelet[3109]: E0130 13:55:32.295651 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:32.295691 kubelet[3109]: E0130 13:55:32.295682 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:32.295756 kubelet[3109]: E0130 13:55:32.295730 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k9ngp" podUID="f6858660-a650-44b5-8920-2ec81bb1b138" Jan 30 13:55:32.297063 containerd[1796]: time="2025-01-30T13:55:32.297040138Z" level=error msg="Failed to destroy network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
13:55:32.297250 containerd[1796]: time="2025-01-30T13:55:32.297237628Z" level=error msg="encountered an error cleaning up failed sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.297286 containerd[1796]: time="2025-01-30T13:55:32.297276471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.297420 kubelet[3109]: E0130 13:55:32.297398 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.297463 kubelet[3109]: E0130 13:55:32.297453 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:32.297482 kubelet[3109]: E0130 13:55:32.297467 
3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:32.297506 kubelet[3109]: E0130 13:55:32.297491 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podUID="111d62eb-6a22-4903-83b5-b0f05dac736f" Jan 30 13:55:32.301973 containerd[1796]: time="2025-01-30T13:55:32.301859507Z" level=error msg="Failed to destroy network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302277 containerd[1796]: time="2025-01-30T13:55:32.302253412Z" level=error msg="encountered an error cleaning up failed sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302338 containerd[1796]: time="2025-01-30T13:55:32.302323048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302382 containerd[1796]: time="2025-01-30T13:55:32.302324106Z" level=error msg="Failed to destroy network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302456 containerd[1796]: time="2025-01-30T13:55:32.302439977Z" level=error msg="Failed to destroy network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302501 kubelet[3109]: E0130 13:55:32.302473 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302550 kubelet[3109]: E0130 13:55:32.302523 3109 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:32.302550 kubelet[3109]: E0130 13:55:32.302544 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:32.302612 kubelet[3109]: E0130 13:55:32.302583 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podUID="8e5acc49-b227-4ac4-a04e-929de29daecb" Jan 30 13:55:32.302648 containerd[1796]: time="2025-01-30T13:55:32.302541284Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302648 containerd[1796]: time="2025-01-30T13:55:32.302579759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302648 containerd[1796]: time="2025-01-30T13:55:32.302585881Z" level=error msg="encountered an error cleaning up failed sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302648 containerd[1796]: time="2025-01-30T13:55:32.302641373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302731 kubelet[3109]: E0130 13:55:32.302700 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302731 kubelet[3109]: E0130 13:55:32.302724 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:32.302767 kubelet[3109]: E0130 13:55:32.302735 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:32.302767 kubelet[3109]: E0130 13:55:32.302753 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:32.302767 kubelet[3109]: E0130 13:55:32.302703 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.302827 kubelet[3109]: E0130 13:55:32.302772 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:32.302827 kubelet[3109]: E0130 13:55:32.302781 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:32.302827 kubelet[3109]: E0130 13:55:32.302793 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podUID="378b5e35-c026-4506-a582-fb431d551682" Jan 30 13:55:32.303010 containerd[1796]: time="2025-01-30T13:55:32.302997664Z" level=error msg="Failed to destroy network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.303135 containerd[1796]: time="2025-01-30T13:55:32.303124359Z" level=error msg="encountered an error cleaning up failed sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.303157 containerd[1796]: time="2025-01-30T13:55:32.303146259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.303210 kubelet[3109]: E0130 13:55:32.303199 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:32.303229 kubelet[3109]: E0130 13:55:32.303216 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:32.303246 kubelet[3109]: E0130 13:55:32.303227 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:32.303265 kubelet[3109]: E0130 13:55:32.303244 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2f94w" 
podUID="07922a7a-83b4-4d16-85d7-30bdc2b6b793" Jan 30 13:55:32.830994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef-shm.mount: Deactivated successfully. Jan 30 13:55:33.266189 kubelet[3109]: I0130 13:55:33.266165 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1" Jan 30 13:55:33.266656 containerd[1796]: time="2025-01-30T13:55:33.266630213Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" Jan 30 13:55:33.266908 containerd[1796]: time="2025-01-30T13:55:33.266852195Z" level=info msg="Ensure that sandbox 25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1 in task-service has been cleanup successfully" Jan 30 13:55:33.267113 containerd[1796]: time="2025-01-30T13:55:33.267088967Z" level=info msg="TearDown network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" successfully" Jan 30 13:55:33.267212 containerd[1796]: time="2025-01-30T13:55:33.267113206Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" returns successfully" Jan 30 13:55:33.267493 containerd[1796]: time="2025-01-30T13:55:33.267469657Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:55:33.267597 containerd[1796]: time="2025-01-30T13:55:33.267576676Z" level=info msg="TearDown network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" successfully" Jan 30 13:55:33.267650 containerd[1796]: time="2025-01-30T13:55:33.267596514Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" returns successfully" Jan 30 13:55:33.267836 kubelet[3109]: I0130 13:55:33.267819 3109 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7" Jan 30 13:55:33.267911 containerd[1796]: time="2025-01-30T13:55:33.267817220Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:55:33.267911 containerd[1796]: time="2025-01-30T13:55:33.267896882Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:55:33.267911 containerd[1796]: time="2025-01-30T13:55:33.267909861Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:55:33.268255 containerd[1796]: time="2025-01-30T13:55:33.268232815Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" Jan 30 13:55:33.268344 containerd[1796]: time="2025-01-30T13:55:33.268238481Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:55:33.268448 containerd[1796]: time="2025-01-30T13:55:33.268414753Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:55:33.268515 containerd[1796]: time="2025-01-30T13:55:33.268444137Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:55:33.268515 containerd[1796]: time="2025-01-30T13:55:33.268486661Z" level=info msg="Ensure that sandbox 62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7 in task-service has been cleanup successfully" Jan 30 13:55:33.268701 containerd[1796]: time="2025-01-30T13:55:33.268679107Z" level=info msg="TearDown network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" successfully" Jan 30 13:55:33.268748 containerd[1796]: time="2025-01-30T13:55:33.268700694Z" level=info 
msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" returns successfully" Jan 30 13:55:33.268900 containerd[1796]: time="2025-01-30T13:55:33.268873872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:55:33.268972 containerd[1796]: time="2025-01-30T13:55:33.268950622Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:55:33.269088 containerd[1796]: time="2025-01-30T13:55:33.269066718Z" level=info msg="TearDown network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" successfully" Jan 30 13:55:33.269159 containerd[1796]: time="2025-01-30T13:55:33.269086845Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" returns successfully" Jan 30 13:55:33.269341 kubelet[3109]: I0130 13:55:33.269320 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e" Jan 30 13:55:33.269443 containerd[1796]: time="2025-01-30T13:55:33.269410316Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:55:33.269551 containerd[1796]: time="2025-01-30T13:55:33.269528650Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:55:33.269600 containerd[1796]: time="2025-01-30T13:55:33.269553134Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 13:55:33.269845 containerd[1796]: time="2025-01-30T13:55:33.269824074Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" Jan 30 13:55:33.269903 containerd[1796]: 
time="2025-01-30T13:55:33.269832002Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:55:33.269984 containerd[1796]: time="2025-01-30T13:55:33.269959781Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:55:33.270038 containerd[1796]: time="2025-01-30T13:55:33.269986773Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:55:33.270079 containerd[1796]: time="2025-01-30T13:55:33.270063234Z" level=info msg="Ensure that sandbox 2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e in task-service has been cleanup successfully" Jan 30 13:55:33.270271 containerd[1796]: time="2025-01-30T13:55:33.270247292Z" level=info msg="TearDown network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" successfully" Jan 30 13:55:33.270323 containerd[1796]: time="2025-01-30T13:55:33.270273143Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" returns successfully" Jan 30 13:55:33.270382 systemd[1]: run-netns-cni\x2d1ec79a46\x2d8f6b\x2d40f8\x2d592f\x2dd307773154aa.mount: Deactivated successfully. 
Jan 30 13:55:33.270637 containerd[1796]: time="2025-01-30T13:55:33.270382398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:4,}" Jan 30 13:55:33.270637 containerd[1796]: time="2025-01-30T13:55:33.270614750Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:55:33.270740 containerd[1796]: time="2025-01-30T13:55:33.270714222Z" level=info msg="TearDown network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" successfully" Jan 30 13:55:33.270797 containerd[1796]: time="2025-01-30T13:55:33.270736355Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" returns successfully" Jan 30 13:55:33.271020 containerd[1796]: time="2025-01-30T13:55:33.270991885Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:55:33.271117 kubelet[3109]: I0130 13:55:33.271094 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef" Jan 30 13:55:33.271200 containerd[1796]: time="2025-01-30T13:55:33.271107164Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 13:55:33.271200 containerd[1796]: time="2025-01-30T13:55:33.271129930Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:55:33.271542 containerd[1796]: time="2025-01-30T13:55:33.271515052Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" Jan 30 13:55:33.271620 containerd[1796]: time="2025-01-30T13:55:33.271568754Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:3,}" Jan 30 13:55:33.271757 containerd[1796]: time="2025-01-30T13:55:33.271734133Z" level=info msg="Ensure that sandbox 2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef in task-service has been cleanup successfully" Jan 30 13:55:33.271934 containerd[1796]: time="2025-01-30T13:55:33.271910408Z" level=info msg="TearDown network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" successfully" Jan 30 13:55:33.272013 containerd[1796]: time="2025-01-30T13:55:33.271935259Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" returns successfully" Jan 30 13:55:33.272241 containerd[1796]: time="2025-01-30T13:55:33.272216446Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:55:33.272335 containerd[1796]: time="2025-01-30T13:55:33.272315074Z" level=info msg="TearDown network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" successfully" Jan 30 13:55:33.272390 containerd[1796]: time="2025-01-30T13:55:33.272336483Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" returns successfully" Jan 30 13:55:33.272659 containerd[1796]: time="2025-01-30T13:55:33.272625541Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:55:33.272771 containerd[1796]: time="2025-01-30T13:55:33.272747950Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:55:33.272844 containerd[1796]: time="2025-01-30T13:55:33.272772651Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 13:55:33.272893 kubelet[3109]: I0130 
13:55:33.272855 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e" Jan 30 13:55:33.273057 containerd[1796]: time="2025-01-30T13:55:33.273034506Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:55:33.273165 containerd[1796]: time="2025-01-30T13:55:33.273144842Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:55:33.273234 containerd[1796]: time="2025-01-30T13:55:33.273163689Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:55:33.273323 containerd[1796]: time="2025-01-30T13:55:33.273297125Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" Jan 30 13:55:33.273585 containerd[1796]: time="2025-01-30T13:55:33.273558540Z" level=info msg="Ensure that sandbox f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e in task-service has been cleanup successfully" Jan 30 13:55:33.273654 containerd[1796]: time="2025-01-30T13:55:33.273607385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:55:33.273777 containerd[1796]: time="2025-01-30T13:55:33.273755215Z" level=info msg="TearDown network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" successfully" Jan 30 13:55:33.273846 containerd[1796]: time="2025-01-30T13:55:33.273779704Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" returns successfully" Jan 30 13:55:33.274017 containerd[1796]: time="2025-01-30T13:55:33.273995410Z" level=info msg="StopPodSandbox for 
\"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:55:33.274115 containerd[1796]: time="2025-01-30T13:55:33.274097706Z" level=info msg="TearDown network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" successfully" Jan 30 13:55:33.274159 containerd[1796]: time="2025-01-30T13:55:33.274117351Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" returns successfully" Jan 30 13:55:33.274434 systemd[1]: run-netns-cni\x2d177de47c\x2dd2c1\x2d45ea\x2d5c87\x2d9e02d04b458f.mount: Deactivated successfully. Jan 30 13:55:33.274574 containerd[1796]: time="2025-01-30T13:55:33.274439819Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:55:33.274574 containerd[1796]: time="2025-01-30T13:55:33.274558325Z" level=info msg="TearDown network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:55:33.274554 systemd[1]: run-netns-cni\x2d3612dbfa\x2dce07\x2d6252\x2d73c7\x2d88375b790024.mount: Deactivated successfully. Jan 30 13:55:33.274784 kubelet[3109]: I0130 13:55:33.274627 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff" Jan 30 13:55:33.274855 containerd[1796]: time="2025-01-30T13:55:33.274580946Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:55:33.274855 containerd[1796]: time="2025-01-30T13:55:33.274805937Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:55:33.274678 systemd[1]: run-netns-cni\x2d70cd81dc\x2d8f9f\x2d461b\x2dd4cb\x2d47842470c6f6.mount: Deactivated successfully. 
Jan 30 13:55:33.274977 containerd[1796]: time="2025-01-30T13:55:33.274908818Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:55:33.274977 containerd[1796]: time="2025-01-30T13:55:33.274925896Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:55:33.275134 containerd[1796]: time="2025-01-30T13:55:33.275107421Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" Jan 30 13:55:33.275340 containerd[1796]: time="2025-01-30T13:55:33.275316827Z" level=info msg="Ensure that sandbox ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff in task-service has been cleanup successfully" Jan 30 13:55:33.275420 containerd[1796]: time="2025-01-30T13:55:33.275333538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:4,}" Jan 30 13:55:33.275542 containerd[1796]: time="2025-01-30T13:55:33.275517335Z" level=info msg="TearDown network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" successfully" Jan 30 13:55:33.275617 containerd[1796]: time="2025-01-30T13:55:33.275538161Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" returns successfully" Jan 30 13:55:33.275865 containerd[1796]: time="2025-01-30T13:55:33.275834970Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:55:33.275972 containerd[1796]: time="2025-01-30T13:55:33.275950047Z" level=info msg="TearDown network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" successfully" Jan 30 13:55:33.276043 containerd[1796]: time="2025-01-30T13:55:33.275973449Z" level=info msg="StopPodSandbox for 
\"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" returns successfully" Jan 30 13:55:33.276141 containerd[1796]: time="2025-01-30T13:55:33.276124818Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:55:33.276183 containerd[1796]: time="2025-01-30T13:55:33.276167402Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:55:33.276183 containerd[1796]: time="2025-01-30T13:55:33.276173677Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:55:33.276292 containerd[1796]: time="2025-01-30T13:55:33.276283120Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:55:33.276328 containerd[1796]: time="2025-01-30T13:55:33.276321495Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:55:33.276354 containerd[1796]: time="2025-01-30T13:55:33.276328241Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:55:33.276579 containerd[1796]: time="2025-01-30T13:55:33.276549532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:4,}" Jan 30 13:55:33.277742 systemd[1]: run-netns-cni\x2d3d2bd55d\x2d1dec\x2d1c12\x2d54c6\x2d41675c9a1bfe.mount: Deactivated successfully. Jan 30 13:55:33.277809 systemd[1]: run-netns-cni\x2d0591faad\x2de939\x2dba45\x2dad22\x2d4f2999e7fbe1.mount: Deactivated successfully. 
Jan 30 13:55:33.327241 containerd[1796]: time="2025-01-30T13:55:33.327206511Z" level=error msg="Failed to destroy network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327361 containerd[1796]: time="2025-01-30T13:55:33.327259052Z" level=error msg="Failed to destroy network for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327480 containerd[1796]: time="2025-01-30T13:55:33.327463918Z" level=error msg="encountered an error cleaning up failed sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327527 containerd[1796]: time="2025-01-30T13:55:33.327476497Z" level=error msg="encountered an error cleaning up failed sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327527 containerd[1796]: time="2025-01-30T13:55:33.327508772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327621 containerd[1796]: time="2025-01-30T13:55:33.327517780Z" level=error msg="Failed to destroy network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327621 containerd[1796]: time="2025-01-30T13:55:33.327511684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327764 containerd[1796]: time="2025-01-30T13:55:33.327745113Z" level=error msg="encountered an error cleaning up failed sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327815 kubelet[3109]: E0130 13:55:33.327767 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327861 kubelet[3109]: E0130 13:55:33.327822 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:33.327861 kubelet[3109]: E0130 13:55:33.327843 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:33.327861 kubelet[3109]: E0130 13:55:33.327785 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.327944 kubelet[3109]: E0130 13:55:33.327879 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" 
Jan 30 13:55:33.327944 kubelet[3109]: E0130 13:55:33.327879 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k9ngp" podUID="f6858660-a650-44b5-8920-2ec81bb1b138" Jan 30 13:55:33.327944 kubelet[3109]: E0130 13:55:33.327897 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:33.328055 kubelet[3109]: E0130 13:55:33.327937 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podUID="8e5acc49-b227-4ac4-a04e-929de29daecb" Jan 30 13:55:33.328137 containerd[1796]: time="2025-01-30T13:55:33.328118546Z" level=error msg="Failed to destroy network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.328218 containerd[1796]: time="2025-01-30T13:55:33.328128639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.328295 kubelet[3109]: E0130 13:55:33.328283 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.328344 kubelet[3109]: E0130 13:55:33.328304 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:33.328344 kubelet[3109]: E0130 13:55:33.328315 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:33.328344 kubelet[3109]: E0130 13:55:33.328335 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podUID="378b5e35-c026-4506-a582-fb431d551682" Jan 30 13:55:33.328478 containerd[1796]: time="2025-01-30T13:55:33.328299913Z" level=error msg="encountered an error cleaning up failed sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.328478 containerd[1796]: time="2025-01-30T13:55:33.328334727Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.328522 kubelet[3109]: E0130 13:55:33.328410 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.328522 kubelet[3109]: E0130 13:55:33.328437 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:33.328522 kubelet[3109]: E0130 13:55:33.328451 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:33.328578 kubelet[3109]: E0130 13:55:33.328473 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:33.329317 containerd[1796]: time="2025-01-30T13:55:33.329303430Z" level=error msg="Failed to destroy network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329470 containerd[1796]: time="2025-01-30T13:55:33.329454992Z" level=error msg="encountered an error cleaning up failed sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329510 containerd[1796]: time="2025-01-30T13:55:33.329479431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329563 kubelet[3109]: E0130 13:55:33.329549 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329600 kubelet[3109]: E0130 13:55:33.329576 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:33.329600 kubelet[3109]: E0130 13:55:33.329594 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:33.329660 kubelet[3109]: E0130 13:55:33.329621 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podUID="111d62eb-6a22-4903-83b5-b0f05dac736f" Jan 30 13:55:33.329710 containerd[1796]: time="2025-01-30T13:55:33.329588911Z" level=error msg="Failed to destroy network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329748 containerd[1796]: time="2025-01-30T13:55:33.329735118Z" level=error msg="encountered an error cleaning up failed sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329779 containerd[1796]: time="2025-01-30T13:55:33.329757698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329855 kubelet[3109]: E0130 13:55:33.329841 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:33.329887 kubelet[3109]: E0130 13:55:33.329861 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:33.329887 kubelet[3109]: E0130 13:55:33.329871 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:33.329948 kubelet[3109]: E0130 13:55:33.329887 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2f94w" 
podUID="07922a7a-83b4-4d16-85d7-30bdc2b6b793" Jan 30 13:55:33.830525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61-shm.mount: Deactivated successfully. Jan 30 13:55:34.276849 kubelet[3109]: I0130 13:55:34.276834 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444" Jan 30 13:55:34.277119 containerd[1796]: time="2025-01-30T13:55:34.277103601Z" level=info msg="StopPodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\"" Jan 30 13:55:34.277290 containerd[1796]: time="2025-01-30T13:55:34.277258373Z" level=info msg="Ensure that sandbox 0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444 in task-service has been cleanup successfully" Jan 30 13:55:34.277379 containerd[1796]: time="2025-01-30T13:55:34.277369224Z" level=info msg="TearDown network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" successfully" Jan 30 13:55:34.277413 containerd[1796]: time="2025-01-30T13:55:34.277378205Z" level=info msg="StopPodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" returns successfully" Jan 30 13:55:34.277530 containerd[1796]: time="2025-01-30T13:55:34.277515607Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" Jan 30 13:55:34.277607 containerd[1796]: time="2025-01-30T13:55:34.277573602Z" level=info msg="TearDown network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" successfully" Jan 30 13:55:34.277635 containerd[1796]: time="2025-01-30T13:55:34.277607991Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" returns successfully" Jan 30 13:55:34.277748 containerd[1796]: time="2025-01-30T13:55:34.277735093Z" level=info msg="StopPodSandbox for 
\"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:55:34.277774 kubelet[3109]: I0130 13:55:34.277741 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145" Jan 30 13:55:34.277798 containerd[1796]: time="2025-01-30T13:55:34.277787355Z" level=info msg="TearDown network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" successfully" Jan 30 13:55:34.277820 containerd[1796]: time="2025-01-30T13:55:34.277798238Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" returns successfully" Jan 30 13:55:34.277917 containerd[1796]: time="2025-01-30T13:55:34.277907902Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:55:34.277941 containerd[1796]: time="2025-01-30T13:55:34.277919759Z" level=info msg="StopPodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\"" Jan 30 13:55:34.277975 containerd[1796]: time="2025-01-30T13:55:34.277957756Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:55:34.277975 containerd[1796]: time="2025-01-30T13:55:34.277967772Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 13:55:34.278050 containerd[1796]: time="2025-01-30T13:55:34.278039743Z" level=info msg="Ensure that sandbox ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145 in task-service has been cleanup successfully" Jan 30 13:55:34.278110 containerd[1796]: time="2025-01-30T13:55:34.278100077Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:55:34.278141 containerd[1796]: time="2025-01-30T13:55:34.278121190Z" level=info msg="TearDown network 
for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" successfully" Jan 30 13:55:34.278141 containerd[1796]: time="2025-01-30T13:55:34.278128817Z" level=info msg="StopPodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" returns successfully" Jan 30 13:55:34.278194 containerd[1796]: time="2025-01-30T13:55:34.278148072Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:55:34.278194 containerd[1796]: time="2025-01-30T13:55:34.278158188Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:55:34.278260 containerd[1796]: time="2025-01-30T13:55:34.278249090Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" Jan 30 13:55:34.278309 containerd[1796]: time="2025-01-30T13:55:34.278299395Z" level=info msg="TearDown network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" successfully" Jan 30 13:55:34.278338 containerd[1796]: time="2025-01-30T13:55:34.278309151Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" returns successfully" Jan 30 13:55:34.278364 containerd[1796]: time="2025-01-30T13:55:34.278351369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:5,}" Jan 30 13:55:34.278410 containerd[1796]: time="2025-01-30T13:55:34.278400394Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:55:34.278459 containerd[1796]: time="2025-01-30T13:55:34.278450658Z" level=info msg="TearDown network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" successfully" Jan 30 13:55:34.278459 containerd[1796]: 
time="2025-01-30T13:55:34.278459045Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" returns successfully" Jan 30 13:55:34.278565 containerd[1796]: time="2025-01-30T13:55:34.278554929Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:55:34.278611 containerd[1796]: time="2025-01-30T13:55:34.278601142Z" level=info msg="TearDown network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:55:34.278634 containerd[1796]: time="2025-01-30T13:55:34.278612613Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:55:34.278728 kubelet[3109]: I0130 13:55:34.278717 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368" Jan 30 13:55:34.278761 containerd[1796]: time="2025-01-30T13:55:34.278732386Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:55:34.278818 containerd[1796]: time="2025-01-30T13:55:34.278806575Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:55:34.278847 containerd[1796]: time="2025-01-30T13:55:34.278818032Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:55:34.278950 containerd[1796]: time="2025-01-30T13:55:34.278940325Z" level=info msg="StopPodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\"" Jan 30 13:55:34.279001 containerd[1796]: time="2025-01-30T13:55:34.278991203Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:5,}" Jan 30 13:55:34.279052 containerd[1796]: time="2025-01-30T13:55:34.279042793Z" level=info msg="Ensure that sandbox 89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368 in task-service has been cleanup successfully" Jan 30 13:55:34.279133 containerd[1796]: time="2025-01-30T13:55:34.279122347Z" level=info msg="TearDown network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" successfully" Jan 30 13:55:34.279177 containerd[1796]: time="2025-01-30T13:55:34.279132128Z" level=info msg="StopPodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" returns successfully" Jan 30 13:55:34.279248 systemd[1]: run-netns-cni\x2d6df7d588\x2d1db0\x2d5319\x2de241\x2d513c7af09103.mount: Deactivated successfully. Jan 30 13:55:34.279375 containerd[1796]: time="2025-01-30T13:55:34.279243312Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" Jan 30 13:55:34.279375 containerd[1796]: time="2025-01-30T13:55:34.279294757Z" level=info msg="TearDown network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" successfully" Jan 30 13:55:34.279375 containerd[1796]: time="2025-01-30T13:55:34.279304092Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" returns successfully" Jan 30 13:55:34.279452 containerd[1796]: time="2025-01-30T13:55:34.279397292Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:55:34.279452 containerd[1796]: time="2025-01-30T13:55:34.279440679Z" level=info msg="TearDown network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" successfully" Jan 30 13:55:34.279452 containerd[1796]: time="2025-01-30T13:55:34.279448223Z" level=info 
msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" returns successfully" Jan 30 13:55:34.279558 containerd[1796]: time="2025-01-30T13:55:34.279548257Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:55:34.279790 containerd[1796]: time="2025-01-30T13:55:34.279777778Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:55:34.279816 containerd[1796]: time="2025-01-30T13:55:34.279791523Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:55:34.279959 containerd[1796]: time="2025-01-30T13:55:34.279944035Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:55:34.280005 kubelet[3109]: I0130 13:55:34.279947 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8" Jan 30 13:55:34.280039 containerd[1796]: time="2025-01-30T13:55:34.280001949Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:55:34.280067 containerd[1796]: time="2025-01-30T13:55:34.280036388Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:55:34.280483 containerd[1796]: time="2025-01-30T13:55:34.280314186Z" level=info msg="StopPodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\"" Jan 30 13:55:34.280722 containerd[1796]: time="2025-01-30T13:55:34.280329312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:5,}" Jan 30 13:55:34.280760 containerd[1796]: 
time="2025-01-30T13:55:34.280538553Z" level=info msg="Ensure that sandbox cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8 in task-service has been cleanup successfully" Jan 30 13:55:34.280874 containerd[1796]: time="2025-01-30T13:55:34.280829782Z" level=info msg="TearDown network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" successfully" Jan 30 13:55:34.280874 containerd[1796]: time="2025-01-30T13:55:34.280844295Z" level=info msg="StopPodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" returns successfully" Jan 30 13:55:34.281014 containerd[1796]: time="2025-01-30T13:55:34.281000167Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" Jan 30 13:55:34.281082 containerd[1796]: time="2025-01-30T13:55:34.281072302Z" level=info msg="TearDown network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" successfully" Jan 30 13:55:34.281112 containerd[1796]: time="2025-01-30T13:55:34.281083397Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" returns successfully" Jan 30 13:55:34.281312 containerd[1796]: time="2025-01-30T13:55:34.281303025Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:55:34.281380 containerd[1796]: time="2025-01-30T13:55:34.281357152Z" level=info msg="TearDown network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" successfully" Jan 30 13:55:34.281608 systemd[1]: run-netns-cni\x2df55f304f\x2de4b8\x2dd754\x2d767d\x2df50142e8ad53.mount: Deactivated successfully. Jan 30 13:55:34.281662 systemd[1]: run-netns-cni\x2d4317c9bf\x2d8aef\x2d293a\x2dc570\x2d27e59690b0f0.mount: Deactivated successfully. 
Jan 30 13:55:34.281708 containerd[1796]: time="2025-01-30T13:55:34.281612821Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" returns successfully" Jan 30 13:55:34.281778 containerd[1796]: time="2025-01-30T13:55:34.281764355Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:55:34.281846 containerd[1796]: time="2025-01-30T13:55:34.281815205Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:55:34.281878 containerd[1796]: time="2025-01-30T13:55:34.281846449Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:55:34.281975 containerd[1796]: time="2025-01-30T13:55:34.281962278Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:55:34.282008 kubelet[3109]: I0130 13:55:34.281987 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61" Jan 30 13:55:34.282041 containerd[1796]: time="2025-01-30T13:55:34.282011743Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:55:34.282041 containerd[1796]: time="2025-01-30T13:55:34.282019357Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:55:34.282273 containerd[1796]: time="2025-01-30T13:55:34.282258741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:5,}" Jan 30 13:55:34.282314 containerd[1796]: time="2025-01-30T13:55:34.282279843Z" level=info msg="StopPodSandbox for 
\"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\"" Jan 30 13:55:34.282394 containerd[1796]: time="2025-01-30T13:55:34.282385110Z" level=info msg="Ensure that sandbox 7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61 in task-service has been cleanup successfully" Jan 30 13:55:34.282479 containerd[1796]: time="2025-01-30T13:55:34.282470292Z" level=info msg="TearDown network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" successfully" Jan 30 13:55:34.282507 containerd[1796]: time="2025-01-30T13:55:34.282478900Z" level=info msg="StopPodSandbox for \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" returns successfully" Jan 30 13:55:34.282647 containerd[1796]: time="2025-01-30T13:55:34.282636370Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" Jan 30 13:55:34.282711 containerd[1796]: time="2025-01-30T13:55:34.282681558Z" level=info msg="TearDown network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" successfully" Jan 30 13:55:34.282739 containerd[1796]: time="2025-01-30T13:55:34.282711978Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" returns successfully" Jan 30 13:55:34.282822 containerd[1796]: time="2025-01-30T13:55:34.282810821Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:55:34.282859 containerd[1796]: time="2025-01-30T13:55:34.282852684Z" level=info msg="TearDown network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" successfully" Jan 30 13:55:34.282888 kubelet[3109]: I0130 13:55:34.282816 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b" Jan 30 13:55:34.282921 containerd[1796]: time="2025-01-30T13:55:34.282861921Z" 
level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" returns successfully" Jan 30 13:55:34.282995 containerd[1796]: time="2025-01-30T13:55:34.282982194Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:55:34.283109 containerd[1796]: time="2025-01-30T13:55:34.283054135Z" level=info msg="StopPodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\"" Jan 30 13:55:34.283109 containerd[1796]: time="2025-01-30T13:55:34.283072564Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:55:34.283109 containerd[1796]: time="2025-01-30T13:55:34.283083406Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 13:55:34.283207 containerd[1796]: time="2025-01-30T13:55:34.283161770Z" level=info msg="Ensure that sandbox 23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b in task-service has been cleanup successfully" Jan 30 13:55:34.283207 containerd[1796]: time="2025-01-30T13:55:34.283189681Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:55:34.283264 containerd[1796]: time="2025-01-30T13:55:34.283233733Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:55:34.283300 containerd[1796]: time="2025-01-30T13:55:34.283255122Z" level=info msg="TearDown network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" successfully" Jan 30 13:55:34.283300 containerd[1796]: time="2025-01-30T13:55:34.283271464Z" level=info msg="StopPodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" returns successfully" Jan 30 13:55:34.283300 containerd[1796]: 
time="2025-01-30T13:55:34.283261011Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:55:34.283427 containerd[1796]: time="2025-01-30T13:55:34.283408690Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" Jan 30 13:55:34.283462 containerd[1796]: time="2025-01-30T13:55:34.283456654Z" level=info msg="TearDown network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" successfully" Jan 30 13:55:34.283482 containerd[1796]: time="2025-01-30T13:55:34.283463325Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" returns successfully" Jan 30 13:55:34.283507 containerd[1796]: time="2025-01-30T13:55:34.283495722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:5,}" Jan 30 13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283565068Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283602556Z" level=info msg="TearDown network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" successfully" Jan 30 13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283609082Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" returns successfully" Jan 30 13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283718555Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283752764Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 
13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283758183Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:55:34.284779 containerd[1796]: time="2025-01-30T13:55:34.283935909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:4,}" Jan 30 13:55:34.284123 systemd[1]: run-netns-cni\x2ddbbb28cd\x2d1601\x2d8900\x2db23b\x2d82551e18f01a.mount: Deactivated successfully. Jan 30 13:55:34.284173 systemd[1]: run-netns-cni\x2d8c1fa5e9\x2d7e99\x2dbebc\x2d44e4\x2d83cc80e3fc64.mount: Deactivated successfully. Jan 30 13:55:34.286059 systemd[1]: run-netns-cni\x2dede825df\x2d43a1\x2d28e4\x2dac22\x2db35c905babc7.mount: Deactivated successfully. Jan 30 13:55:34.336619 containerd[1796]: time="2025-01-30T13:55:34.336585095Z" level=error msg="Failed to destroy network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.336830 containerd[1796]: time="2025-01-30T13:55:34.336817577Z" level=error msg="encountered an error cleaning up failed sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.336865 containerd[1796]: time="2025-01-30T13:55:34.336854260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.337031 kubelet[3109]: E0130 13:55:34.337007 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.337072 kubelet[3109]: E0130 13:55:34.337049 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:34.337072 kubelet[3109]: E0130 13:55:34.337065 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" Jan 30 13:55:34.337123 kubelet[3109]: E0130 13:55:34.337093 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-57746584d-z55mr_calico-apiserver(111d62eb-6a22-4903-83b5-b0f05dac736f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podUID="111d62eb-6a22-4903-83b5-b0f05dac736f" Jan 30 13:55:34.346545 containerd[1796]: time="2025-01-30T13:55:34.346509772Z" level=error msg="Failed to destroy network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.346771 containerd[1796]: time="2025-01-30T13:55:34.346754605Z" level=error msg="encountered an error cleaning up failed sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.346926 containerd[1796]: time="2025-01-30T13:55:34.346907359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347025 containerd[1796]: 
time="2025-01-30T13:55:34.346869014Z" level=error msg="Failed to destroy network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347085 kubelet[3109]: E0130 13:55:34.347063 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347124 kubelet[3109]: E0130 13:55:34.347101 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:34.347124 kubelet[3109]: E0130 13:55:34.347116 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" Jan 30 13:55:34.347187 kubelet[3109]: E0130 13:55:34.347145 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57746584d-r5fdn_calico-apiserver(378b5e35-c026-4506-a582-fb431d551682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podUID="378b5e35-c026-4506-a582-fb431d551682" Jan 30 13:55:34.347235 containerd[1796]: time="2025-01-30T13:55:34.347123631Z" level=error msg="encountered an error cleaning up failed sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347235 containerd[1796]: time="2025-01-30T13:55:34.347157128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347282 kubelet[3109]: E0130 13:55:34.347227 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347282 kubelet[3109]: E0130 13:55:34.347251 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:34.347282 kubelet[3109]: E0130 13:55:34.347268 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k9ngp" Jan 30 13:55:34.347336 kubelet[3109]: E0130 13:55:34.347293 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k9ngp_kube-system(f6858660-a650-44b5-8920-2ec81bb1b138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k9ngp" podUID="f6858660-a650-44b5-8920-2ec81bb1b138" Jan 30 13:55:34.347373 containerd[1796]: time="2025-01-30T13:55:34.347297162Z" level=error msg="Failed to 
destroy network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347503 containerd[1796]: time="2025-01-30T13:55:34.347485817Z" level=error msg="encountered an error cleaning up failed sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347543 containerd[1796]: time="2025-01-30T13:55:34.347520181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347612 kubelet[3109]: E0130 13:55:34.347597 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.347645 kubelet[3109]: E0130 13:55:34.347620 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:34.347674 kubelet[3109]: E0130 13:55:34.347665 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnwwl" Jan 30 13:55:34.347735 kubelet[3109]: E0130 13:55:34.347692 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnwwl_calico-system(b153ff53-b790-4ffe-82ac-a800a8f52eef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnwwl" podUID="b153ff53-b790-4ffe-82ac-a800a8f52eef" Jan 30 13:55:34.348045 containerd[1796]: time="2025-01-30T13:55:34.348030024Z" level=error msg="Failed to destroy network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.348183 containerd[1796]: time="2025-01-30T13:55:34.348168874Z" level=error msg="encountered an 
error cleaning up failed sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.348222 containerd[1796]: time="2025-01-30T13:55:34.348195669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.348269 kubelet[3109]: E0130 13:55:34.348258 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.348299 kubelet[3109]: E0130 13:55:34.348277 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:34.348299 kubelet[3109]: E0130 13:55:34.348288 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" Jan 30 13:55:34.348339 kubelet[3109]: E0130 13:55:34.348309 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57994df9cf-cnldt_calico-system(8e5acc49-b227-4ac4-a04e-929de29daecb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podUID="8e5acc49-b227-4ac4-a04e-929de29daecb" Jan 30 13:55:34.350812 containerd[1796]: time="2025-01-30T13:55:34.350792031Z" level=error msg="Failed to destroy network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.350965 containerd[1796]: time="2025-01-30T13:55:34.350952094Z" level=error msg="encountered an error cleaning up failed sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 30 13:55:34.351006 containerd[1796]: time="2025-01-30T13:55:34.350982699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.351122 kubelet[3109]: E0130 13:55:34.351106 3109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:55:34.351152 kubelet[3109]: E0130 13:55:34.351134 3109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:34.351152 kubelet[3109]: E0130 13:55:34.351146 3109 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-2f94w" Jan 30 13:55:34.351187 kubelet[3109]: E0130 13:55:34.351166 3109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2f94w_kube-system(07922a7a-83b4-4d16-85d7-30bdc2b6b793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2f94w" podUID="07922a7a-83b4-4d16-85d7-30bdc2b6b793" Jan 30 13:55:35.072772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539310189.mount: Deactivated successfully. Jan 30 13:55:35.085956 containerd[1796]: time="2025-01-30T13:55:35.085908677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:35.086174 containerd[1796]: time="2025-01-30T13:55:35.086126064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:55:35.086379 containerd[1796]: time="2025-01-30T13:55:35.086339336Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:35.087334 containerd[1796]: time="2025-01-30T13:55:35.087291201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:35.087665 containerd[1796]: time="2025-01-30T13:55:35.087625132Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.850985988s" Jan 30 13:55:35.087665 containerd[1796]: time="2025-01-30T13:55:35.087639359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:55:35.091065 containerd[1796]: time="2025-01-30T13:55:35.091049829Z" level=info msg="CreateContainer within sandbox \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:55:35.103137 containerd[1796]: time="2025-01-30T13:55:35.103093696Z" level=info msg="CreateContainer within sandbox \"b5913dce7236356cb8d8fc5cf93467150ae190c4f655231b659c816ff02ea064\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"118042ea164ba776674698fa013bb08c70fa85392b84931b1b2007a40aab4d9c\"" Jan 30 13:55:35.103330 containerd[1796]: time="2025-01-30T13:55:35.103319413Z" level=info msg="StartContainer for \"118042ea164ba776674698fa013bb08c70fa85392b84931b1b2007a40aab4d9c\"" Jan 30 13:55:35.122756 systemd[1]: Started cri-containerd-118042ea164ba776674698fa013bb08c70fa85392b84931b1b2007a40aab4d9c.scope - libcontainer container 118042ea164ba776674698fa013bb08c70fa85392b84931b1b2007a40aab4d9c. Jan 30 13:55:35.139133 containerd[1796]: time="2025-01-30T13:55:35.139107533Z" level=info msg="StartContainer for \"118042ea164ba776674698fa013bb08c70fa85392b84931b1b2007a40aab4d9c\" returns successfully" Jan 30 13:55:35.201404 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:55:35.201464 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Jan 30 13:55:35.287270 kubelet[3109]: I0130 13:55:35.287242 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93" Jan 30 13:55:35.287817 containerd[1796]: time="2025-01-30T13:55:35.287787944Z" level=info msg="StopPodSandbox for \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\"" Jan 30 13:55:35.288121 containerd[1796]: time="2025-01-30T13:55:35.288053799Z" level=info msg="Ensure that sandbox 7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93 in task-service has been cleanup successfully" Jan 30 13:55:35.288257 containerd[1796]: time="2025-01-30T13:55:35.288237401Z" level=info msg="TearDown network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\" successfully" Jan 30 13:55:35.288314 containerd[1796]: time="2025-01-30T13:55:35.288257687Z" level=info msg="StopPodSandbox for \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\" returns successfully" Jan 30 13:55:35.288499 containerd[1796]: time="2025-01-30T13:55:35.288476657Z" level=info msg="StopPodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\"" Jan 30 13:55:35.288577 containerd[1796]: time="2025-01-30T13:55:35.288561176Z" level=info msg="TearDown network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" successfully" Jan 30 13:55:35.288577 containerd[1796]: time="2025-01-30T13:55:35.288575641Z" level=info msg="StopPodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" returns successfully" Jan 30 13:55:35.288859 containerd[1796]: time="2025-01-30T13:55:35.288840276Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" Jan 30 13:55:35.288944 containerd[1796]: time="2025-01-30T13:55:35.288912058Z" level=info msg="TearDown network for sandbox 
\"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" successfully" Jan 30 13:55:35.288944 containerd[1796]: time="2025-01-30T13:55:35.288924672Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" returns successfully" Jan 30 13:55:35.289173 containerd[1796]: time="2025-01-30T13:55:35.289153836Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:55:35.289245 containerd[1796]: time="2025-01-30T13:55:35.289231675Z" level=info msg="TearDown network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" successfully" Jan 30 13:55:35.289281 containerd[1796]: time="2025-01-30T13:55:35.289245283Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" returns successfully" Jan 30 13:55:35.289486 kubelet[3109]: I0130 13:55:35.289421 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314" Jan 30 13:55:35.289556 containerd[1796]: time="2025-01-30T13:55:35.289433943Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:55:35.289606 containerd[1796]: time="2025-01-30T13:55:35.289529813Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:55:35.289606 containerd[1796]: time="2025-01-30T13:55:35.289589427Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:55:35.289812 containerd[1796]: time="2025-01-30T13:55:35.289781046Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:55:35.289978 containerd[1796]: time="2025-01-30T13:55:35.289869672Z" level=info msg="StopPodSandbox for 
\"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\"" Jan 30 13:55:35.290173 containerd[1796]: time="2025-01-30T13:55:35.289888773Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:55:35.290241 containerd[1796]: time="2025-01-30T13:55:35.290176647Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:55:35.290758 containerd[1796]: time="2025-01-30T13:55:35.290552662Z" level=info msg="Ensure that sandbox 2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314 in task-service has been cleanup successfully" Jan 30 13:55:35.290758 containerd[1796]: time="2025-01-30T13:55:35.290724839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:6,}" Jan 30 13:55:35.290969 containerd[1796]: time="2025-01-30T13:55:35.290942993Z" level=info msg="TearDown network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\" successfully" Jan 30 13:55:35.291044 containerd[1796]: time="2025-01-30T13:55:35.290966143Z" level=info msg="StopPodSandbox for \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\" returns successfully" Jan 30 13:55:35.291885 containerd[1796]: time="2025-01-30T13:55:35.291867929Z" level=info msg="StopPodSandbox for \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\"" Jan 30 13:55:35.291956 containerd[1796]: time="2025-01-30T13:55:35.291927365Z" level=info msg="TearDown network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" successfully" Jan 30 13:55:35.292006 containerd[1796]: time="2025-01-30T13:55:35.291955830Z" level=info msg="StopPodSandbox for \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" returns successfully" Jan 30 13:55:35.292133 
containerd[1796]: time="2025-01-30T13:55:35.292116676Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" Jan 30 13:55:35.292225 containerd[1796]: time="2025-01-30T13:55:35.292193219Z" level=info msg="TearDown network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" successfully" Jan 30 13:55:35.292266 containerd[1796]: time="2025-01-30T13:55:35.292226034Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" returns successfully" Jan 30 13:55:35.292375 kubelet[3109]: I0130 13:55:35.292362 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9" Jan 30 13:55:35.292441 containerd[1796]: time="2025-01-30T13:55:35.292374379Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:55:35.292477 containerd[1796]: time="2025-01-30T13:55:35.292447382Z" level=info msg="TearDown network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" successfully" Jan 30 13:55:35.292477 containerd[1796]: time="2025-01-30T13:55:35.292455457Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" returns successfully" Jan 30 13:55:35.292580 containerd[1796]: time="2025-01-30T13:55:35.292566851Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:55:35.292643 containerd[1796]: time="2025-01-30T13:55:35.292613854Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:55:35.292677 containerd[1796]: time="2025-01-30T13:55:35.292643675Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 
13:55:35.292709 containerd[1796]: time="2025-01-30T13:55:35.292698189Z" level=info msg="StopPodSandbox for \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\"" Jan 30 13:55:35.292762 containerd[1796]: time="2025-01-30T13:55:35.292751868Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:55:35.292812 containerd[1796]: time="2025-01-30T13:55:35.292793170Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:55:35.292845 containerd[1796]: time="2025-01-30T13:55:35.292811759Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:55:35.292845 containerd[1796]: time="2025-01-30T13:55:35.292825035Z" level=info msg="Ensure that sandbox e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9 in task-service has been cleanup successfully" Jan 30 13:55:35.292940 containerd[1796]: time="2025-01-30T13:55:35.292929667Z" level=info msg="TearDown network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\" successfully" Jan 30 13:55:35.292972 containerd[1796]: time="2025-01-30T13:55:35.292939002Z" level=info msg="StopPodSandbox for \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\" returns successfully" Jan 30 13:55:35.293002 containerd[1796]: time="2025-01-30T13:55:35.292993397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:6,}" Jan 30 13:55:35.293052 containerd[1796]: time="2025-01-30T13:55:35.293041938Z" level=info msg="StopPodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\"" Jan 30 13:55:35.293098 containerd[1796]: time="2025-01-30T13:55:35.293090515Z" level=info msg="TearDown network for sandbox 
\"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" successfully" Jan 30 13:55:35.293120 containerd[1796]: time="2025-01-30T13:55:35.293098544Z" level=info msg="StopPodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" returns successfully" Jan 30 13:55:35.293206 containerd[1796]: time="2025-01-30T13:55:35.293197729Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" Jan 30 13:55:35.293239 containerd[1796]: time="2025-01-30T13:55:35.293233045Z" level=info msg="TearDown network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" successfully" Jan 30 13:55:35.293261 containerd[1796]: time="2025-01-30T13:55:35.293239125Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" returns successfully" Jan 30 13:55:35.293356 containerd[1796]: time="2025-01-30T13:55:35.293345306Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:55:35.293406 containerd[1796]: time="2025-01-30T13:55:35.293397523Z" level=info msg="TearDown network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" successfully" Jan 30 13:55:35.293441 containerd[1796]: time="2025-01-30T13:55:35.293405936Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" returns successfully" Jan 30 13:55:35.293533 containerd[1796]: time="2025-01-30T13:55:35.293521540Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:55:35.293578 containerd[1796]: time="2025-01-30T13:55:35.293566243Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 13:55:35.293578 containerd[1796]: time="2025-01-30T13:55:35.293572696Z" level=info msg="StopPodSandbox for 
\"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:55:35.293779 containerd[1796]: time="2025-01-30T13:55:35.293767891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:5,}" Jan 30 13:55:35.294474 kubelet[3109]: I0130 13:55:35.294462 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62" Jan 30 13:55:35.294712 containerd[1796]: time="2025-01-30T13:55:35.294690127Z" level=info msg="StopPodSandbox for \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\"" Jan 30 13:55:35.294828 containerd[1796]: time="2025-01-30T13:55:35.294817233Z" level=info msg="Ensure that sandbox 221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62 in task-service has been cleanup successfully" Jan 30 13:55:35.294938 containerd[1796]: time="2025-01-30T13:55:35.294924272Z" level=info msg="TearDown network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\" successfully" Jan 30 13:55:35.294979 containerd[1796]: time="2025-01-30T13:55:35.294937245Z" level=info msg="StopPodSandbox for \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\" returns successfully" Jan 30 13:55:35.295105 containerd[1796]: time="2025-01-30T13:55:35.295092552Z" level=info msg="StopPodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\"" Jan 30 13:55:35.295149 containerd[1796]: time="2025-01-30T13:55:35.295141089Z" level=info msg="TearDown network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" successfully" Jan 30 13:55:35.295149 containerd[1796]: time="2025-01-30T13:55:35.295148098Z" level=info msg="StopPodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" returns successfully" Jan 30 13:55:35.295279 
containerd[1796]: time="2025-01-30T13:55:35.295268221Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" Jan 30 13:55:35.295359 containerd[1796]: time="2025-01-30T13:55:35.295349102Z" level=info msg="TearDown network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" successfully" Jan 30 13:55:35.295381 containerd[1796]: time="2025-01-30T13:55:35.295360023Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" returns successfully" Jan 30 13:55:35.295521 containerd[1796]: time="2025-01-30T13:55:35.295502326Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295563173Z" level=info msg="TearDown network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" successfully" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295572331Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" returns successfully" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295702062Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295740735Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295746738Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295836608Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:55:35.296015 containerd[1796]: 
time="2025-01-30T13:55:35.295871662Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295877602Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:55:35.296015 containerd[1796]: time="2025-01-30T13:55:35.295934420Z" level=info msg="StopPodSandbox for \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\"" Jan 30 13:55:35.296197 kubelet[3109]: I0130 13:55:35.295720 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8" Jan 30 13:55:35.296227 containerd[1796]: time="2025-01-30T13:55:35.296060045Z" level=info msg="Ensure that sandbox a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8 in task-service has been cleanup successfully" Jan 30 13:55:35.296227 containerd[1796]: time="2025-01-30T13:55:35.296064768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:6,}" Jan 30 13:55:35.296227 containerd[1796]: time="2025-01-30T13:55:35.296163218Z" level=info msg="TearDown network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\" successfully" Jan 30 13:55:35.296227 containerd[1796]: time="2025-01-30T13:55:35.296175010Z" level=info msg="StopPodSandbox for \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\" returns successfully" Jan 30 13:55:35.296320 containerd[1796]: time="2025-01-30T13:55:35.296306685Z" level=info msg="StopPodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\"" Jan 30 13:55:35.296361 containerd[1796]: time="2025-01-30T13:55:35.296350519Z" level=info msg="TearDown network for sandbox 
\"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" successfully" Jan 30 13:55:35.296383 containerd[1796]: time="2025-01-30T13:55:35.296361126Z" level=info msg="StopPodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" returns successfully" Jan 30 13:55:35.296502 containerd[1796]: time="2025-01-30T13:55:35.296491820Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" Jan 30 13:55:35.296548 containerd[1796]: time="2025-01-30T13:55:35.296537652Z" level=info msg="TearDown network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" successfully" Jan 30 13:55:35.296548 containerd[1796]: time="2025-01-30T13:55:35.296546243Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" returns successfully" Jan 30 13:55:35.296679 containerd[1796]: time="2025-01-30T13:55:35.296665881Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:55:35.296731 containerd[1796]: time="2025-01-30T13:55:35.296718110Z" level=info msg="TearDown network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" successfully" Jan 30 13:55:35.296893 containerd[1796]: time="2025-01-30T13:55:35.296730192Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" returns successfully" Jan 30 13:55:35.296893 containerd[1796]: time="2025-01-30T13:55:35.296829291Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:55:35.296893 containerd[1796]: time="2025-01-30T13:55:35.296865245Z" level=info msg="TearDown network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:55:35.296893 containerd[1796]: time="2025-01-30T13:55:35.296871135Z" level=info msg="StopPodSandbox for 
\"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:55:35.296996 kubelet[3109]: I0130 13:55:35.296802 3109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5" Jan 30 13:55:35.297028 containerd[1796]: time="2025-01-30T13:55:35.296975099Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:55:35.297028 containerd[1796]: time="2025-01-30T13:55:35.297008730Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:55:35.297028 containerd[1796]: time="2025-01-30T13:55:35.297014188Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:55:35.297083 containerd[1796]: time="2025-01-30T13:55:35.297071801Z" level=info msg="StopPodSandbox for \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\"" Jan 30 13:55:35.297202 containerd[1796]: time="2025-01-30T13:55:35.297190626Z" level=info msg="Ensure that sandbox 3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5 in task-service has been cleanup successfully" Jan 30 13:55:35.297243 containerd[1796]: time="2025-01-30T13:55:35.297230601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:6,}" Jan 30 13:55:35.297314 containerd[1796]: time="2025-01-30T13:55:35.297299125Z" level=info msg="TearDown network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\" successfully" Jan 30 13:55:35.297335 containerd[1796]: time="2025-01-30T13:55:35.297314968Z" level=info msg="StopPodSandbox for \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\" returns successfully" Jan 30 
13:55:35.297438 containerd[1796]: time="2025-01-30T13:55:35.297426811Z" level=info msg="StopPodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\"" Jan 30 13:55:35.297491 containerd[1796]: time="2025-01-30T13:55:35.297482541Z" level=info msg="TearDown network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" successfully" Jan 30 13:55:35.297511 containerd[1796]: time="2025-01-30T13:55:35.297492745Z" level=info msg="StopPodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" returns successfully" Jan 30 13:55:35.297610 containerd[1796]: time="2025-01-30T13:55:35.297596622Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" Jan 30 13:55:35.297661 containerd[1796]: time="2025-01-30T13:55:35.297652299Z" level=info msg="TearDown network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" successfully" Jan 30 13:55:35.297696 containerd[1796]: time="2025-01-30T13:55:35.297660993Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" returns successfully" Jan 30 13:55:35.297819 containerd[1796]: time="2025-01-30T13:55:35.297806192Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:55:35.297870 containerd[1796]: time="2025-01-30T13:55:35.297859495Z" level=info msg="TearDown network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" successfully" Jan 30 13:55:35.297895 containerd[1796]: time="2025-01-30T13:55:35.297870314Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" returns successfully" Jan 30 13:55:35.298055 containerd[1796]: time="2025-01-30T13:55:35.298045532Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:55:35.298107 
containerd[1796]: time="2025-01-30T13:55:35.298093940Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:55:35.298107 containerd[1796]: time="2025-01-30T13:55:35.298104927Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:55:35.298236 containerd[1796]: time="2025-01-30T13:55:35.298224937Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:55:35.298274 containerd[1796]: time="2025-01-30T13:55:35.298267040Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:55:35.298300 containerd[1796]: time="2025-01-30T13:55:35.298273594Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:55:35.298497 containerd[1796]: time="2025-01-30T13:55:35.298487915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:6,}" Jan 30 13:55:35.318635 kubelet[3109]: I0130 13:55:35.318220 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q69b5" podStartSLOduration=0.667314533 podStartE2EDuration="14.31820565s" podCreationTimestamp="2025-01-30 13:55:21 +0000 UTC" firstStartedPulling="2025-01-30 13:55:21.437060855 +0000 UTC m=+12.310240808" lastFinishedPulling="2025-01-30 13:55:35.087951978 +0000 UTC m=+25.961131925" observedRunningTime="2025-01-30 13:55:35.318033261 +0000 UTC m=+26.191213209" watchObservedRunningTime="2025-01-30 13:55:35.31820565 +0000 UTC m=+26.191385598" Jan 30 13:55:35.581881 systemd-networkd[1715]: calif035dc895ac: Link UP Jan 30 13:55:35.582418 systemd-networkd[1715]: calif035dc895ac: Gained carrier Jan 30 
13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.319 [INFO][5811] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.326 [INFO][5811] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0 calico-apiserver-57746584d- calico-apiserver 111d62eb-6a22-4903-83b5-b0f05dac736f 660 0 2025-01-30 13:55:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57746584d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-fe6ab79c24 calico-apiserver-57746584d-z55mr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif035dc895ac [] []}} ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.326 [INFO][5811] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.343 [INFO][5902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" HandleID="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.595741 containerd[1796]: 
2025-01-30 13:55:35.426 [INFO][5902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" HandleID="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f96b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-fe6ab79c24", "pod":"calico-apiserver-57746584d-z55mr", "timestamp":"2025-01-30 13:55:35.343616598 +0000 UTC"}, Hostname:"ci-4186.1.0-a-fe6ab79c24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.426 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5902] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-fe6ab79c24' Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.431 [INFO][5902] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.522 [INFO][5902] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.532 [INFO][5902] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.536 [INFO][5902] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.540 [INFO][5902] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.540 [INFO][5902] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.543 [INFO][5902] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6 Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.549 [INFO][5902] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.559 [INFO][5902] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.19.193/26] block=192.168.19.192/26 handle="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.559 [INFO][5902] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.193/26] handle="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.559 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:35.595741 containerd[1796]: 2025-01-30 13:55:35.559 [INFO][5902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.193/26] IPv6=[] ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" HandleID="k8s-pod-network.bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.596739 containerd[1796]: 2025-01-30 13:55:35.566 [INFO][5811] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0", GenerateName:"calico-apiserver-57746584d-", Namespace:"calico-apiserver", SelfLink:"", UID:"111d62eb-6a22-4903-83b5-b0f05dac736f", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57746584d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"", Pod:"calico-apiserver-57746584d-z55mr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif035dc895ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.596739 containerd[1796]: 2025-01-30 13:55:35.566 [INFO][5811] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.193/32] ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.596739 containerd[1796]: 2025-01-30 13:55:35.567 [INFO][5811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif035dc895ac ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.596739 containerd[1796]: 2025-01-30 13:55:35.582 [INFO][5811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" 
WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.596739 containerd[1796]: 2025-01-30 13:55:35.582 [INFO][5811] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0", GenerateName:"calico-apiserver-57746584d-", Namespace:"calico-apiserver", SelfLink:"", UID:"111d62eb-6a22-4903-83b5-b0f05dac736f", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57746584d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6", Pod:"calico-apiserver-57746584d-z55mr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif035dc895ac", MAC:"fe:07:92:87:4e:da", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.596739 containerd[1796]: 2025-01-30 13:55:35.593 [INFO][5811] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-z55mr" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--z55mr-eth0" Jan 30 13:55:35.607499 containerd[1796]: time="2025-01-30T13:55:35.607417162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:35.607499 containerd[1796]: time="2025-01-30T13:55:35.607464259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:35.607499 containerd[1796]: time="2025-01-30T13:55:35.607471135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.607628 containerd[1796]: time="2025-01-30T13:55:35.607509856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.626779 systemd[1]: Started cri-containerd-bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6.scope - libcontainer container bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6. 
Jan 30 13:55:35.637733 systemd-networkd[1715]: calia0eea13e8b8: Link UP Jan 30 13:55:35.637860 systemd-networkd[1715]: calia0eea13e8b8: Gained carrier Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.322 [INFO][5829] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.328 [INFO][5829] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0 coredns-668d6bf9bc- kube-system f6858660-a650-44b5-8920-2ec81bb1b138 654 0 2025-01-30 13:55:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-fe6ab79c24 coredns-668d6bf9bc-k9ngp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0eea13e8b8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.328 [INFO][5829] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.344 [INFO][5918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" HandleID="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 
13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.426 [INFO][5918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" HandleID="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299e70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-fe6ab79c24", "pod":"coredns-668d6bf9bc-k9ngp", "timestamp":"2025-01-30 13:55:35.344986527 +0000 UTC"}, Hostname:"ci-4186.1.0-a-fe6ab79c24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.559 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.559 [INFO][5918] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-fe6ab79c24' Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.563 [INFO][5918] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.621 [INFO][5918] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.628 [INFO][5918] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.629 [INFO][5918] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.630 [INFO][5918] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.630 [INFO][5918] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.631 [INFO][5918] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8 Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.633 [INFO][5918] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5918] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.19.194/26] block=192.168.19.192/26 handle="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5918] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.194/26] handle="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:35.643152 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.194/26] IPv6=[] ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" HandleID="k8s-pod-network.3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 13:55:35.643577 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5829] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6858660-a650-44b5-8920-2ec81bb1b138", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"", Pod:"coredns-668d6bf9bc-k9ngp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0eea13e8b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.643577 containerd[1796]: 2025-01-30 13:55:35.637 [INFO][5829] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.194/32] ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 13:55:35.643577 containerd[1796]: 2025-01-30 13:55:35.637 [INFO][5829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0eea13e8b8 ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 13:55:35.643577 containerd[1796]: 2025-01-30 13:55:35.637 [INFO][5829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 13:55:35.643577 containerd[1796]: 2025-01-30 13:55:35.637 [INFO][5829] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6858660-a650-44b5-8920-2ec81bb1b138", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8", Pod:"coredns-668d6bf9bc-k9ngp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0eea13e8b8", MAC:"3e:af:44:fa:5d:11", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.643577 containerd[1796]: 2025-01-30 13:55:35.642 [INFO][5829] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k9ngp" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--k9ngp-eth0" Jan 30 13:55:35.649966 containerd[1796]: time="2025-01-30T13:55:35.649939323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-z55mr,Uid:111d62eb-6a22-4903-83b5-b0f05dac736f,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6\"" Jan 30 13:55:35.650644 containerd[1796]: time="2025-01-30T13:55:35.650633060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:55:35.652386 containerd[1796]: time="2025-01-30T13:55:35.652350430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:35.652386 containerd[1796]: time="2025-01-30T13:55:35.652381267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:35.652473 containerd[1796]: time="2025-01-30T13:55:35.652389185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.652473 containerd[1796]: time="2025-01-30T13:55:35.652454379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.682711 systemd[1]: Started cri-containerd-3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8.scope - libcontainer container 3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8. Jan 30 13:55:35.707548 containerd[1796]: time="2025-01-30T13:55:35.707494662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9ngp,Uid:f6858660-a650-44b5-8920-2ec81bb1b138,Namespace:kube-system,Attempt:6,} returns sandbox id \"3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8\"" Jan 30 13:55:35.708787 containerd[1796]: time="2025-01-30T13:55:35.708771360Z" level=info msg="CreateContainer within sandbox \"3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:55:35.713286 containerd[1796]: time="2025-01-30T13:55:35.713272966Z" level=info msg="CreateContainer within sandbox \"3af32cfa60c2043e847c26eb7039a25c44082a482565af3d5285a572368914d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65092f7750a79c16b6916dd33ca84d7b42a534922f83427cad3e293f3c6f24f7\"" Jan 30 13:55:35.713496 containerd[1796]: time="2025-01-30T13:55:35.713461765Z" level=info msg="StartContainer for \"65092f7750a79c16b6916dd33ca84d7b42a534922f83427cad3e293f3c6f24f7\"" Jan 30 13:55:35.737586 systemd[1]: Started cri-containerd-65092f7750a79c16b6916dd33ca84d7b42a534922f83427cad3e293f3c6f24f7.scope - libcontainer container 65092f7750a79c16b6916dd33ca84d7b42a534922f83427cad3e293f3c6f24f7. 
Jan 30 13:55:35.739197 systemd-networkd[1715]: calie3d6daca681: Link UP Jan 30 13:55:35.739323 systemd-networkd[1715]: calie3d6daca681: Gained carrier Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.322 [INFO][5825] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.328 [INFO][5825] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0 calico-kube-controllers-57994df9cf- calico-system 8e5acc49-b227-4ac4-a04e-929de29daecb 657 0 2025-01-30 13:55:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57994df9cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.0-a-fe6ab79c24 calico-kube-controllers-57994df9cf-cnldt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie3d6daca681 [] []}} ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.328 [INFO][5825] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.345 [INFO][5911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" HandleID="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" HandleID="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-fe6ab79c24", "pod":"calico-kube-controllers-57994df9cf-cnldt", "timestamp":"2025-01-30 13:55:35.345356758 +0000 UTC"}, Hostname:"ci-4186.1.0-a-fe6ab79c24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.636 [INFO][5911] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-fe6ab79c24' Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.663 [INFO][5911] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.721 [INFO][5911] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.729 [INFO][5911] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.730 [INFO][5911] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.731 [INFO][5911] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.731 [INFO][5911] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.732 [INFO][5911] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35 Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.735 [INFO][5911] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.737 [INFO][5911] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.19.195/26] block=192.168.19.192/26 handle="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.737 [INFO][5911] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.195/26] handle="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.737 [INFO][5911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:35.745048 containerd[1796]: 2025-01-30 13:55:35.737 [INFO][5911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.195/26] IPv6=[] ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" HandleID="k8s-pod-network.dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.745645 containerd[1796]: 2025-01-30 13:55:35.738 [INFO][5825] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0", GenerateName:"calico-kube-controllers-57994df9cf-", Namespace:"calico-system", SelfLink:"", UID:"8e5acc49-b227-4ac4-a04e-929de29daecb", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57994df9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"", Pod:"calico-kube-controllers-57994df9cf-cnldt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3d6daca681", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.745645 containerd[1796]: 2025-01-30 13:55:35.738 [INFO][5825] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.195/32] ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.745645 containerd[1796]: 2025-01-30 13:55:35.738 [INFO][5825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3d6daca681 ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.745645 containerd[1796]: 2025-01-30 13:55:35.739 [INFO][5825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.745645 containerd[1796]: 2025-01-30 13:55:35.739 [INFO][5825] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0", GenerateName:"calico-kube-controllers-57994df9cf-", Namespace:"calico-system", SelfLink:"", UID:"8e5acc49-b227-4ac4-a04e-929de29daecb", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57994df9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35", Pod:"calico-kube-controllers-57994df9cf-cnldt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3d6daca681", MAC:"72:f6:03:fb:24:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.745645 containerd[1796]: 2025-01-30 13:55:35.744 [INFO][5825] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35" Namespace="calico-system" Pod="calico-kube-controllers-57994df9cf-cnldt" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--kube--controllers--57994df9cf--cnldt-eth0" Jan 30 13:55:35.751911 containerd[1796]: time="2025-01-30T13:55:35.751889275Z" level=info msg="StartContainer for \"65092f7750a79c16b6916dd33ca84d7b42a534922f83427cad3e293f3c6f24f7\" returns successfully" Jan 30 13:55:35.755290 containerd[1796]: time="2025-01-30T13:55:35.755238926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:35.755290 containerd[1796]: time="2025-01-30T13:55:35.755273217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:35.755290 containerd[1796]: time="2025-01-30T13:55:35.755281124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.755448 containerd[1796]: time="2025-01-30T13:55:35.755327339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.779974 systemd[1]: Started cri-containerd-dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35.scope - libcontainer container dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35. 
Jan 30 13:55:35.842274 systemd[1]: run-netns-cni\x2df9ef2483\x2d5f6c\x2dd784\x2db55b\x2d45280f89f656.mount: Deactivated successfully. Jan 30 13:55:35.842336 systemd[1]: run-netns-cni\x2d32ead49c\x2d575b\x2d6d02\x2d0acd\x2dcd5e3da89e59.mount: Deactivated successfully. Jan 30 13:55:35.842378 systemd[1]: run-netns-cni\x2dba0e7d6d\x2d0ca5\x2dcac8\x2da5f8\x2d188982135407.mount: Deactivated successfully. Jan 30 13:55:35.842415 systemd[1]: run-netns-cni\x2d0cde40e1\x2ddbb1\x2d7113\x2d2a31\x2df2eddd339dc8.mount: Deactivated successfully. Jan 30 13:55:35.842463 systemd[1]: run-netns-cni\x2d1e009e9f\x2d1e22\x2d68d8\x2d4e02\x2d5e3b89f5cddd.mount: Deactivated successfully. Jan 30 13:55:35.842502 systemd[1]: run-netns-cni\x2d5e9957f9\x2d3aa5\x2d14a6\x2d35c0\x2d69847b036a81.mount: Deactivated successfully. Jan 30 13:55:35.846174 systemd-networkd[1715]: cali6663c69b0f1: Link UP Jan 30 13:55:35.846329 systemd-networkd[1715]: cali6663c69b0f1: Gained carrier Jan 30 13:55:35.848641 containerd[1796]: time="2025-01-30T13:55:35.848619067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57994df9cf-cnldt,Uid:8e5acc49-b227-4ac4-a04e-929de29daecb,Namespace:calico-system,Attempt:6,} returns sandbox id \"dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35\"" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.313 [INFO][5777] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.321 [INFO][5777] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0 coredns-668d6bf9bc- kube-system 07922a7a-83b4-4d16-85d7-30bdc2b6b793 659 0 2025-01-30 13:55:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-4186.1.0-a-fe6ab79c24 coredns-668d6bf9bc-2f94w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6663c69b0f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.321 [INFO][5777] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.338 [INFO][5878] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" HandleID="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5878] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" HandleID="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003671b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-fe6ab79c24", "pod":"coredns-668d6bf9bc-2f94w", "timestamp":"2025-01-30 13:55:35.338056257 +0000 UTC"}, Hostname:"ci-4186.1.0-a-fe6ab79c24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.737 [INFO][5878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.737 [INFO][5878] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-fe6ab79c24' Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.764 [INFO][5878] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.824 [INFO][5878] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.832 [INFO][5878] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.834 [INFO][5878] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.836 [INFO][5878] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.836 [INFO][5878] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.837 [INFO][5878] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777 Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.840 [INFO][5878] 
ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.844 [INFO][5878] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.196/26] block=192.168.19.192/26 handle="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.844 [INFO][5878] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.196/26] handle="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.844 [INFO][5878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:35.852051 containerd[1796]: 2025-01-30 13:55:35.844 [INFO][5878] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.196/26] IPv6=[] ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" HandleID="k8s-pod-network.e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.852480 containerd[1796]: 2025-01-30 13:55:35.845 [INFO][5777] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"07922a7a-83b4-4d16-85d7-30bdc2b6b793", ResourceVersion:"659", 
Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"", Pod:"coredns-668d6bf9bc-2f94w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6663c69b0f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.852480 containerd[1796]: 2025-01-30 13:55:35.845 [INFO][5777] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.196/32] ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.852480 containerd[1796]: 2025-01-30 13:55:35.845 [INFO][5777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6663c69b0f1 
ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.852480 containerd[1796]: 2025-01-30 13:55:35.846 [INFO][5777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.852480 containerd[1796]: 2025-01-30 13:55:35.846 [INFO][5777] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"07922a7a-83b4-4d16-85d7-30bdc2b6b793", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777", 
Pod:"coredns-668d6bf9bc-2f94w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6663c69b0f1", MAC:"3a:d3:60:5a:14:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.852480 containerd[1796]: 2025-01-30 13:55:35.851 [INFO][5777] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777" Namespace="kube-system" Pod="coredns-668d6bf9bc-2f94w" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-coredns--668d6bf9bc--2f94w-eth0" Jan 30 13:55:35.861113 containerd[1796]: time="2025-01-30T13:55:35.861070621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:35.861300 containerd[1796]: time="2025-01-30T13:55:35.861285482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:35.861300 containerd[1796]: time="2025-01-30T13:55:35.861295643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.861359 containerd[1796]: time="2025-01-30T13:55:35.861349467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.885771 systemd[1]: Started cri-containerd-e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777.scope - libcontainer container e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777. Jan 30 13:55:35.951471 systemd-networkd[1715]: cali773268012ea: Link UP Jan 30 13:55:35.951662 systemd-networkd[1715]: cali773268012ea: Gained carrier Jan 30 13:55:35.957563 containerd[1796]: time="2025-01-30T13:55:35.957523448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2f94w,Uid:07922a7a-83b4-4d16-85d7-30bdc2b6b793,Namespace:kube-system,Attempt:6,} returns sandbox id \"e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777\"" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.315 [INFO][5788] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.322 [INFO][5788] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0 csi-node-driver- calico-system b153ff53-b790-4ffe-82ac-a800a8f52eef 572 0 2025-01-30 13:55:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.0-a-fe6ab79c24 csi-node-driver-jnwwl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali773268012ea [] []}} ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.322 [INFO][5788] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.341 [INFO][5889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" HandleID="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.427 [INFO][5889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" HandleID="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132a50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-fe6ab79c24", "pod":"csi-node-driver-jnwwl", "timestamp":"2025-01-30 13:55:35.34191733 +0000 UTC"}, Hostname:"ci-4186.1.0-a-fe6ab79c24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.428 [INFO][5889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.844 [INFO][5889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.844 [INFO][5889] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-fe6ab79c24' Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.864 [INFO][5889] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.924 [INFO][5889] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.933 [INFO][5889] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.936 [INFO][5889] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.938 [INFO][5889] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.938 [INFO][5889] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.940 [INFO][5889] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95 Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.944 [INFO][5889] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.948 [INFO][5889] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.19.197/26] block=192.168.19.192/26 handle="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.949 [INFO][5889] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.197/26] handle="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.949 [INFO][5889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:35.958709 containerd[1796]: 2025-01-30 13:55:35.949 [INFO][5889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.197/26] IPv6=[] ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" HandleID="k8s-pod-network.6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.959258 containerd[1796]: 2025-01-30 13:55:35.950 [INFO][5788] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b153ff53-b790-4ffe-82ac-a800a8f52eef", ResourceVersion:"572", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"", Pod:"csi-node-driver-jnwwl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali773268012ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.959258 containerd[1796]: 2025-01-30 13:55:35.950 [INFO][5788] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.197/32] ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.959258 containerd[1796]: 2025-01-30 13:55:35.950 [INFO][5788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali773268012ea ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.959258 containerd[1796]: 2025-01-30 13:55:35.951 [INFO][5788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.959258 containerd[1796]: 2025-01-30 13:55:35.951 
[INFO][5788] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b153ff53-b790-4ffe-82ac-a800a8f52eef", ResourceVersion:"572", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95", Pod:"csi-node-driver-jnwwl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali773268012ea", MAC:"d2:51:5e:34:5b:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:35.959258 containerd[1796]: 2025-01-30 13:55:35.957 [INFO][5788] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95" Namespace="calico-system" Pod="csi-node-driver-jnwwl" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-csi--node--driver--jnwwl-eth0" Jan 30 13:55:35.959474 containerd[1796]: time="2025-01-30T13:55:35.959329827Z" level=info msg="CreateContainer within sandbox \"e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:55:35.965503 containerd[1796]: time="2025-01-30T13:55:35.965453995Z" level=info msg="CreateContainer within sandbox \"e53d64747d74799d9c3ed2fc4a81fe4b20dcc0d062b8684a19e7da41fa47b777\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"315825ef8d69aed3065e67ed9a6c6bfada7c51994c43f5ad071127c32e102200\"" Jan 30 13:55:35.965796 containerd[1796]: time="2025-01-30T13:55:35.965782118Z" level=info msg="StartContainer for \"315825ef8d69aed3065e67ed9a6c6bfada7c51994c43f5ad071127c32e102200\"" Jan 30 13:55:35.968385 containerd[1796]: time="2025-01-30T13:55:35.968336218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:35.968385 containerd[1796]: time="2025-01-30T13:55:35.968370698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:35.968385 containerd[1796]: time="2025-01-30T13:55:35.968378050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.968525 containerd[1796]: time="2025-01-30T13:55:35.968420677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:35.983604 systemd[1]: Started cri-containerd-6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95.scope - libcontainer container 6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95. Jan 30 13:55:35.984958 systemd[1]: Started cri-containerd-315825ef8d69aed3065e67ed9a6c6bfada7c51994c43f5ad071127c32e102200.scope - libcontainer container 315825ef8d69aed3065e67ed9a6c6bfada7c51994c43f5ad071127c32e102200. Jan 30 13:55:35.994248 containerd[1796]: time="2025-01-30T13:55:35.994225077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnwwl,Uid:b153ff53-b790-4ffe-82ac-a800a8f52eef,Namespace:calico-system,Attempt:5,} returns sandbox id \"6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95\"" Jan 30 13:55:35.996815 containerd[1796]: time="2025-01-30T13:55:35.996795610Z" level=info msg="StartContainer for \"315825ef8d69aed3065e67ed9a6c6bfada7c51994c43f5ad071127c32e102200\" returns successfully" Jan 30 13:55:36.051876 systemd-networkd[1715]: calidad4d70742a: Link UP Jan 30 13:55:36.051980 systemd-networkd[1715]: calidad4d70742a: Gained carrier Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.306 [INFO][5761] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.320 [INFO][5761] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0 calico-apiserver-57746584d- calico-apiserver 378b5e35-c026-4506-a582-fb431d551682 658 0 2025-01-30 13:55:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57746584d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-fe6ab79c24 
calico-apiserver-57746584d-r5fdn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidad4d70742a [] []}} ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.320 [INFO][5761] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.337 [INFO][5875] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" HandleID="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.428 [INFO][5875] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" HandleID="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367b90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-fe6ab79c24", "pod":"calico-apiserver-57746584d-r5fdn", "timestamp":"2025-01-30 13:55:35.337728307 +0000 UTC"}, Hostname:"ci-4186.1.0-a-fe6ab79c24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.428 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.949 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.949 [INFO][5875] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-fe6ab79c24' Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:35.964 [INFO][5875] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.024 [INFO][5875] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.034 [INFO][5875] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.038 [INFO][5875] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.042 [INFO][5875] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.043 [INFO][5875] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.044 [INFO][5875] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc Jan 30 13:55:36.057052 
containerd[1796]: 2025-01-30 13:55:36.046 [INFO][5875] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.050 [INFO][5875] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.198/26] block=192.168.19.192/26 handle="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.050 [INFO][5875] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.198/26] handle="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" host="ci-4186.1.0-a-fe6ab79c24" Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.050 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:36.057052 containerd[1796]: 2025-01-30 13:55:36.050 [INFO][5875] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.198/26] IPv6=[] ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" HandleID="k8s-pod-network.ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Workload="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.057480 containerd[1796]: 2025-01-30 13:55:36.051 [INFO][5761] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0", GenerateName:"calico-apiserver-57746584d-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"378b5e35-c026-4506-a582-fb431d551682", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57746584d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", ContainerID:"", Pod:"calico-apiserver-57746584d-r5fdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidad4d70742a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:36.057480 containerd[1796]: 2025-01-30 13:55:36.051 [INFO][5761] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.198/32] ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.057480 containerd[1796]: 2025-01-30 13:55:36.051 [INFO][5761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidad4d70742a ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" 
WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.057480 containerd[1796]: 2025-01-30 13:55:36.051 [INFO][5761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.057480 containerd[1796]: 2025-01-30 13:55:36.052 [INFO][5761] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0", GenerateName:"calico-apiserver-57746584d-", Namespace:"calico-apiserver", SelfLink:"", UID:"378b5e35-c026-4506-a582-fb431d551682", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57746584d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-fe6ab79c24", 
ContainerID:"ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc", Pod:"calico-apiserver-57746584d-r5fdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidad4d70742a", MAC:"3e:e9:6f:0f:76:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:36.057480 containerd[1796]: 2025-01-30 13:55:36.056 [INFO][5761] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc" Namespace="calico-apiserver" Pod="calico-apiserver-57746584d-r5fdn" WorkloadEndpoint="ci--4186.1.0--a--fe6ab79c24-k8s-calico--apiserver--57746584d--r5fdn-eth0" Jan 30 13:55:36.066747 containerd[1796]: time="2025-01-30T13:55:36.066488936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:36.066747 containerd[1796]: time="2025-01-30T13:55:36.066709856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:36.066747 containerd[1796]: time="2025-01-30T13:55:36.066717352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:36.066861 containerd[1796]: time="2025-01-30T13:55:36.066759640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:36.083672 systemd[1]: Started cri-containerd-ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc.scope - libcontainer container ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc. 
Jan 30 13:55:36.110346 containerd[1796]: time="2025-01-30T13:55:36.110279719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57746584d-r5fdn,Uid:378b5e35-c026-4506-a582-fb431d551682,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc\"" Jan 30 13:55:36.335334 kubelet[3109]: I0130 13:55:36.335212 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2f94w" podStartSLOduration=21.335176506 podStartE2EDuration="21.335176506s" podCreationTimestamp="2025-01-30 13:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:36.334570026 +0000 UTC m=+27.207750057" watchObservedRunningTime="2025-01-30 13:55:36.335176506 +0000 UTC m=+27.208356506" Jan 30 13:55:36.340632 kubelet[3109]: I0130 13:55:36.340579 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:36.371597 kubelet[3109]: I0130 13:55:36.371419 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k9ngp" podStartSLOduration=21.371393951 podStartE2EDuration="21.371393951s" podCreationTimestamp="2025-01-30 13:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:36.370747802 +0000 UTC m=+27.243927771" watchObservedRunningTime="2025-01-30 13:55:36.371393951 +0000 UTC m=+27.244573917" Jan 30 13:55:36.669432 kernel: bpftool[6549]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:55:36.816992 systemd-networkd[1715]: vxlan.calico: Link UP Jan 30 13:55:36.816996 systemd-networkd[1715]: vxlan.calico: Gained carrier Jan 30 13:55:36.842569 systemd-networkd[1715]: calif035dc895ac: Gained IPv6LL Jan 30 13:55:37.098680 systemd-networkd[1715]: calie3d6daca681: 
Gained IPv6LL Jan 30 13:55:37.226624 systemd-networkd[1715]: calidad4d70742a: Gained IPv6LL Jan 30 13:55:37.290714 systemd-networkd[1715]: cali773268012ea: Gained IPv6LL Jan 30 13:55:37.418608 systemd-networkd[1715]: calia0eea13e8b8: Gained IPv6LL Jan 30 13:55:37.802539 systemd-networkd[1715]: cali6663c69b0f1: Gained IPv6LL Jan 30 13:55:37.931681 containerd[1796]: time="2025-01-30T13:55:37.931625121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.931873 containerd[1796]: time="2025-01-30T13:55:37.931830712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:55:37.932173 containerd[1796]: time="2025-01-30T13:55:37.932132664Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.933475 containerd[1796]: time="2025-01-30T13:55:37.933449261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.933768 containerd[1796]: time="2025-01-30T13:55:37.933726400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.283079703s" Jan 30 13:55:37.933768 containerd[1796]: time="2025-01-30T13:55:37.933740115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 
13:55:37.934265 containerd[1796]: time="2025-01-30T13:55:37.934252622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:55:37.934735 containerd[1796]: time="2025-01-30T13:55:37.934721383Z" level=info msg="CreateContainer within sandbox \"bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:55:37.938683 containerd[1796]: time="2025-01-30T13:55:37.938666635Z" level=info msg="CreateContainer within sandbox \"bb582cd7ce6559b9ca481661a5d3961abad345e94a76835bdc20575f7a3fe5e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1fb6c6d54d3de034d80c89a39f7f59ba73d08003b9ad07f535f3624ae48b7ce7\"" Jan 30 13:55:37.938899 containerd[1796]: time="2025-01-30T13:55:37.938884587Z" level=info msg="StartContainer for \"1fb6c6d54d3de034d80c89a39f7f59ba73d08003b9ad07f535f3624ae48b7ce7\"" Jan 30 13:55:37.970711 systemd[1]: Started cri-containerd-1fb6c6d54d3de034d80c89a39f7f59ba73d08003b9ad07f535f3624ae48b7ce7.scope - libcontainer container 1fb6c6d54d3de034d80c89a39f7f59ba73d08003b9ad07f535f3624ae48b7ce7. 
Jan 30 13:55:37.995511 systemd-networkd[1715]: vxlan.calico: Gained IPv6LL Jan 30 13:55:37.997098 containerd[1796]: time="2025-01-30T13:55:37.997076862Z" level=info msg="StartContainer for \"1fb6c6d54d3de034d80c89a39f7f59ba73d08003b9ad07f535f3624ae48b7ce7\" returns successfully" Jan 30 13:55:38.351120 kubelet[3109]: I0130 13:55:38.351080 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57746584d-z55mr" podStartSLOduration=16.067431138 podStartE2EDuration="18.35106619s" podCreationTimestamp="2025-01-30 13:55:20 +0000 UTC" firstStartedPulling="2025-01-30 13:55:35.650525436 +0000 UTC m=+26.523705384" lastFinishedPulling="2025-01-30 13:55:37.934160489 +0000 UTC m=+28.807340436" observedRunningTime="2025-01-30 13:55:38.350729149 +0000 UTC m=+29.223909100" watchObservedRunningTime="2025-01-30 13:55:38.35106619 +0000 UTC m=+29.224246153" Jan 30 13:55:38.480165 kubelet[3109]: I0130 13:55:38.480100 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:39.348169 kubelet[3109]: I0130 13:55:39.348097 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:40.125287 containerd[1796]: time="2025-01-30T13:55:40.125233169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.125521 containerd[1796]: time="2025-01-30T13:55:40.125394881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:55:40.125853 containerd[1796]: time="2025-01-30T13:55:40.125834463Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.126786 containerd[1796]: time="2025-01-30T13:55:40.126747148Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.127186 containerd[1796]: time="2025-01-30T13:55:40.127143214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.192875153s" Jan 30 13:55:40.127186 containerd[1796]: time="2025-01-30T13:55:40.127158870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:55:40.127669 containerd[1796]: time="2025-01-30T13:55:40.127633434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:55:40.130511 containerd[1796]: time="2025-01-30T13:55:40.130493912Z" level=info msg="CreateContainer within sandbox \"dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:55:40.137663 containerd[1796]: time="2025-01-30T13:55:40.137619828Z" level=info msg="CreateContainer within sandbox \"dcd0d1da29954edcf460ec3fdf426a0cb365a555622483c30664a76200f3bc35\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f26a33a746c0945dd203e7db2d39f9cd7d6c3f237b12d5e8e3872dfbaec0652a\"" Jan 30 13:55:40.137871 containerd[1796]: time="2025-01-30T13:55:40.137829813Z" level=info msg="StartContainer for \"f26a33a746c0945dd203e7db2d39f9cd7d6c3f237b12d5e8e3872dfbaec0652a\"" Jan 30 13:55:40.172832 systemd[1]: Started cri-containerd-f26a33a746c0945dd203e7db2d39f9cd7d6c3f237b12d5e8e3872dfbaec0652a.scope - 
libcontainer container f26a33a746c0945dd203e7db2d39f9cd7d6c3f237b12d5e8e3872dfbaec0652a. Jan 30 13:55:40.248310 containerd[1796]: time="2025-01-30T13:55:40.248283839Z" level=info msg="StartContainer for \"f26a33a746c0945dd203e7db2d39f9cd7d6c3f237b12d5e8e3872dfbaec0652a\" returns successfully" Jan 30 13:55:40.379296 kubelet[3109]: I0130 13:55:40.379049 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57994df9cf-cnldt" podStartSLOduration=15.100670353 podStartE2EDuration="19.379016458s" podCreationTimestamp="2025-01-30 13:55:21 +0000 UTC" firstStartedPulling="2025-01-30 13:55:35.849239497 +0000 UTC m=+26.722419445" lastFinishedPulling="2025-01-30 13:55:40.127585602 +0000 UTC m=+31.000765550" observedRunningTime="2025-01-30 13:55:40.378599664 +0000 UTC m=+31.251779691" watchObservedRunningTime="2025-01-30 13:55:40.379016458 +0000 UTC m=+31.252196450" Jan 30 13:55:41.361945 kubelet[3109]: I0130 13:55:41.361850 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:41.575723 containerd[1796]: time="2025-01-30T13:55:41.575666633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.576017 containerd[1796]: time="2025-01-30T13:55:41.575887799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:55:41.576327 containerd[1796]: time="2025-01-30T13:55:41.576283104Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.577257 containerd[1796]: time="2025-01-30T13:55:41.577215445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 13:55:41.577716 containerd[1796]: time="2025-01-30T13:55:41.577670524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.450021726s" Jan 30 13:55:41.577716 containerd[1796]: time="2025-01-30T13:55:41.577684708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:55:41.578298 containerd[1796]: time="2025-01-30T13:55:41.578264342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:55:41.579009 containerd[1796]: time="2025-01-30T13:55:41.578996565Z" level=info msg="CreateContainer within sandbox \"6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:55:41.587397 containerd[1796]: time="2025-01-30T13:55:41.587355620Z" level=info msg="CreateContainer within sandbox \"6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b3426914945e9b30f04be25e0cba2e38981b9d88308e411f960ec1cf2b1fc6e\"" Jan 30 13:55:41.587841 containerd[1796]: time="2025-01-30T13:55:41.587756860Z" level=info msg="StartContainer for \"9b3426914945e9b30f04be25e0cba2e38981b9d88308e411f960ec1cf2b1fc6e\"" Jan 30 13:55:41.621715 systemd[1]: Started cri-containerd-9b3426914945e9b30f04be25e0cba2e38981b9d88308e411f960ec1cf2b1fc6e.scope - libcontainer container 9b3426914945e9b30f04be25e0cba2e38981b9d88308e411f960ec1cf2b1fc6e. 
Jan 30 13:55:41.639344 containerd[1796]: time="2025-01-30T13:55:41.639287341Z" level=info msg="StartContainer for \"9b3426914945e9b30f04be25e0cba2e38981b9d88308e411f960ec1cf2b1fc6e\" returns successfully" Jan 30 13:55:41.946573 containerd[1796]: time="2025-01-30T13:55:41.946434005Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.946770 containerd[1796]: time="2025-01-30T13:55:41.946720811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:55:41.948014 containerd[1796]: time="2025-01-30T13:55:41.947973720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 369.693945ms" Jan 30 13:55:41.948014 containerd[1796]: time="2025-01-30T13:55:41.947988681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:55:41.948630 containerd[1796]: time="2025-01-30T13:55:41.948588089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:55:41.949361 containerd[1796]: time="2025-01-30T13:55:41.949348046Z" level=info msg="CreateContainer within sandbox \"ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:55:41.954191 containerd[1796]: time="2025-01-30T13:55:41.954148949Z" level=info msg="CreateContainer within sandbox \"ce50412f3b21811ea50c735d38fa6cc0d1db84d4735f4ab22002b5934706b7bc\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"730226a3dcec549f69fc438c9a9b191290dc41d7f8d305f263bbbfb6783a5ce5\"" Jan 30 13:55:41.954419 containerd[1796]: time="2025-01-30T13:55:41.954408554Z" level=info msg="StartContainer for \"730226a3dcec549f69fc438c9a9b191290dc41d7f8d305f263bbbfb6783a5ce5\"" Jan 30 13:55:41.981666 systemd[1]: Started cri-containerd-730226a3dcec549f69fc438c9a9b191290dc41d7f8d305f263bbbfb6783a5ce5.scope - libcontainer container 730226a3dcec549f69fc438c9a9b191290dc41d7f8d305f263bbbfb6783a5ce5. Jan 30 13:55:42.006334 containerd[1796]: time="2025-01-30T13:55:42.006283624Z" level=info msg="StartContainer for \"730226a3dcec549f69fc438c9a9b191290dc41d7f8d305f263bbbfb6783a5ce5\" returns successfully" Jan 30 13:55:42.369948 kubelet[3109]: I0130 13:55:42.369901 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57746584d-r5fdn" podStartSLOduration=16.532391855 podStartE2EDuration="22.369885112s" podCreationTimestamp="2025-01-30 13:55:20 +0000 UTC" firstStartedPulling="2025-01-30 13:55:36.11093482 +0000 UTC m=+26.984114768" lastFinishedPulling="2025-01-30 13:55:41.948428075 +0000 UTC m=+32.821608025" observedRunningTime="2025-01-30 13:55:42.369722549 +0000 UTC m=+33.242902497" watchObservedRunningTime="2025-01-30 13:55:42.369885112 +0000 UTC m=+33.243065058" Jan 30 13:55:43.367763 kubelet[3109]: I0130 13:55:43.367712 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:43.469102 containerd[1796]: time="2025-01-30T13:55:43.469046420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.469298 containerd[1796]: time="2025-01-30T13:55:43.469273939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:55:43.469686 
containerd[1796]: time="2025-01-30T13:55:43.469645687Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.471912 containerd[1796]: time="2025-01-30T13:55:43.471871605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.472198 containerd[1796]: time="2025-01-30T13:55:43.472159938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.523540166s" Jan 30 13:55:43.472198 containerd[1796]: time="2025-01-30T13:55:43.472175165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:55:43.473191 containerd[1796]: time="2025-01-30T13:55:43.473178169Z" level=info msg="CreateContainer within sandbox \"6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:55:43.480518 containerd[1796]: time="2025-01-30T13:55:43.480502558Z" level=info msg="CreateContainer within sandbox \"6f4a5966190163c5c759a515e081cf4d09cd1a5fc47b708538ab8b0d37953f95\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9c5a970a0d03ec23016d16e3c822899dd955290561a11faf6042258f875f819a\"" Jan 30 13:55:43.480780 containerd[1796]: time="2025-01-30T13:55:43.480767670Z" level=info 
msg="StartContainer for \"9c5a970a0d03ec23016d16e3c822899dd955290561a11faf6042258f875f819a\"" Jan 30 13:55:43.512627 systemd[1]: Started cri-containerd-9c5a970a0d03ec23016d16e3c822899dd955290561a11faf6042258f875f819a.scope - libcontainer container 9c5a970a0d03ec23016d16e3c822899dd955290561a11faf6042258f875f819a. Jan 30 13:55:43.532466 containerd[1796]: time="2025-01-30T13:55:43.532439041Z" level=info msg="StartContainer for \"9c5a970a0d03ec23016d16e3c822899dd955290561a11faf6042258f875f819a\" returns successfully" Jan 30 13:55:44.215071 kubelet[3109]: I0130 13:55:44.214999 3109 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:55:44.215071 kubelet[3109]: I0130 13:55:44.215040 3109 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:55:44.401630 kubelet[3109]: I0130 13:55:44.401516 3109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jnwwl" podStartSLOduration=15.923711669 podStartE2EDuration="23.401478163s" podCreationTimestamp="2025-01-30 13:55:21 +0000 UTC" firstStartedPulling="2025-01-30 13:55:35.994769044 +0000 UTC m=+26.867948992" lastFinishedPulling="2025-01-30 13:55:43.472535538 +0000 UTC m=+34.345715486" observedRunningTime="2025-01-30 13:55:44.401175502 +0000 UTC m=+35.274355551" watchObservedRunningTime="2025-01-30 13:55:44.401478163 +0000 UTC m=+35.274658159" Jan 30 13:55:47.138467 kubelet[3109]: I0130 13:55:47.138334 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:57.390023 kubelet[3109]: I0130 13:55:57.389913 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:58.957464 kubelet[3109]: I0130 13:55:58.957376 3109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jan 30 13:56:09.167800 containerd[1796]: time="2025-01-30T13:56:09.167745715Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:56:09.168127 containerd[1796]: time="2025-01-30T13:56:09.167818058Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:56:09.168127 containerd[1796]: time="2025-01-30T13:56:09.167826409Z" level=info msg="StopPodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:56:09.168127 containerd[1796]: time="2025-01-30T13:56:09.168015833Z" level=info msg="RemovePodSandbox for \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:56:09.168127 containerd[1796]: time="2025-01-30T13:56:09.168031834Z" level=info msg="Forcibly stopping sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\"" Jan 30 13:56:09.168127 containerd[1796]: time="2025-01-30T13:56:09.168069460Z" level=info msg="TearDown network for sandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" successfully" Jan 30 13:56:09.169822 containerd[1796]: time="2025-01-30T13:56:09.169782741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.169822 containerd[1796]: time="2025-01-30T13:56:09.169820823Z" level=info msg="RemovePodSandbox \"095816d48775bed194e3feaca6025323f881c2946524f718a61c16f7e166d3c5\" returns successfully" Jan 30 13:56:09.170195 containerd[1796]: time="2025-01-30T13:56:09.170121600Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:56:09.170295 containerd[1796]: time="2025-01-30T13:56:09.170221157Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:56:09.170295 containerd[1796]: time="2025-01-30T13:56:09.170227377Z" level=info msg="StopPodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:56:09.170420 containerd[1796]: time="2025-01-30T13:56:09.170406213Z" level=info msg="RemovePodSandbox for \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:56:09.170491 containerd[1796]: time="2025-01-30T13:56:09.170427229Z" level=info msg="Forcibly stopping sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\"" Jan 30 13:56:09.170532 containerd[1796]: time="2025-01-30T13:56:09.170501725Z" level=info msg="TearDown network for sandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" successfully" Jan 30 13:56:09.171845 containerd[1796]: time="2025-01-30T13:56:09.171832479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.171889 containerd[1796]: time="2025-01-30T13:56:09.171854399Z" level=info msg="RemovePodSandbox \"987214176db5a8b92a4171d711a23c859d9d0209a5158379e9035a3cc56e41c3\" returns successfully" Jan 30 13:56:09.172055 containerd[1796]: time="2025-01-30T13:56:09.172044470Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:56:09.172108 containerd[1796]: time="2025-01-30T13:56:09.172101252Z" level=info msg="TearDown network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" successfully" Jan 30 13:56:09.172190 containerd[1796]: time="2025-01-30T13:56:09.172107827Z" level=info msg="StopPodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" returns successfully" Jan 30 13:56:09.172312 containerd[1796]: time="2025-01-30T13:56:09.172303122Z" level=info msg="RemovePodSandbox for \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:56:09.172332 containerd[1796]: time="2025-01-30T13:56:09.172315652Z" level=info msg="Forcibly stopping sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\"" Jan 30 13:56:09.172354 containerd[1796]: time="2025-01-30T13:56:09.172347924Z" level=info msg="TearDown network for sandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" successfully" Jan 30 13:56:09.173785 containerd[1796]: time="2025-01-30T13:56:09.173763935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.173785 containerd[1796]: time="2025-01-30T13:56:09.173780950Z" level=info msg="RemovePodSandbox \"6a2c9336581056bbb4cee295e46538d0918d6472e53273245bfad5afb2a2e91f\" returns successfully" Jan 30 13:56:09.174036 containerd[1796]: time="2025-01-30T13:56:09.173997559Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" Jan 30 13:56:09.174069 containerd[1796]: time="2025-01-30T13:56:09.174040218Z" level=info msg="TearDown network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" successfully" Jan 30 13:56:09.174069 containerd[1796]: time="2025-01-30T13:56:09.174050203Z" level=info msg="StopPodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" returns successfully" Jan 30 13:56:09.174209 containerd[1796]: time="2025-01-30T13:56:09.174195661Z" level=info msg="RemovePodSandbox for \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" Jan 30 13:56:09.174209 containerd[1796]: time="2025-01-30T13:56:09.174205165Z" level=info msg="Forcibly stopping sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\"" Jan 30 13:56:09.174250 containerd[1796]: time="2025-01-30T13:56:09.174235901Z" level=info msg="TearDown network for sandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" successfully" Jan 30 13:56:09.175499 containerd[1796]: time="2025-01-30T13:56:09.175488307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.175535 containerd[1796]: time="2025-01-30T13:56:09.175504644Z" level=info msg="RemovePodSandbox \"ca64e5e627b628eb2c5bdb479bdfc1bf8ee4067dae6c9063c72177a4c43604ff\" returns successfully" Jan 30 13:56:09.175631 containerd[1796]: time="2025-01-30T13:56:09.175618823Z" level=info msg="StopPodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\"" Jan 30 13:56:09.175680 containerd[1796]: time="2025-01-30T13:56:09.175671568Z" level=info msg="TearDown network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" successfully" Jan 30 13:56:09.175705 containerd[1796]: time="2025-01-30T13:56:09.175679365Z" level=info msg="StopPodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" returns successfully" Jan 30 13:56:09.175782 containerd[1796]: time="2025-01-30T13:56:09.175771247Z" level=info msg="RemovePodSandbox for \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\"" Jan 30 13:56:09.175828 containerd[1796]: time="2025-01-30T13:56:09.175784473Z" level=info msg="Forcibly stopping sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\"" Jan 30 13:56:09.175863 containerd[1796]: time="2025-01-30T13:56:09.175829431Z" level=info msg="TearDown network for sandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" successfully" Jan 30 13:56:09.177056 containerd[1796]: time="2025-01-30T13:56:09.177027533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.177116 containerd[1796]: time="2025-01-30T13:56:09.177066255Z" level=info msg="RemovePodSandbox \"89547431e8c0128423e74725fd34d0de5464a9aee4f2842138ea3f7abb719368\" returns successfully" Jan 30 13:56:09.177209 containerd[1796]: time="2025-01-30T13:56:09.177199057Z" level=info msg="StopPodSandbox for \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\"" Jan 30 13:56:09.177247 containerd[1796]: time="2025-01-30T13:56:09.177239960Z" level=info msg="TearDown network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\" successfully" Jan 30 13:56:09.177272 containerd[1796]: time="2025-01-30T13:56:09.177246793Z" level=info msg="StopPodSandbox for \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\" returns successfully" Jan 30 13:56:09.177353 containerd[1796]: time="2025-01-30T13:56:09.177345189Z" level=info msg="RemovePodSandbox for \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\"" Jan 30 13:56:09.177375 containerd[1796]: time="2025-01-30T13:56:09.177355675Z" level=info msg="Forcibly stopping sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\"" Jan 30 13:56:09.177404 containerd[1796]: time="2025-01-30T13:56:09.177388346Z" level=info msg="TearDown network for sandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\" successfully" Jan 30 13:56:09.178537 containerd[1796]: time="2025-01-30T13:56:09.178482423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.178537 containerd[1796]: time="2025-01-30T13:56:09.178514135Z" level=info msg="RemovePodSandbox \"3e3077bcfbea331caa072d5bffbe8a435b42398b7f6c00c6e8edec060e42b2d5\" returns successfully" Jan 30 13:56:09.178704 containerd[1796]: time="2025-01-30T13:56:09.178695820Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:56:09.178735 containerd[1796]: time="2025-01-30T13:56:09.178731106Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 13:56:09.178754 containerd[1796]: time="2025-01-30T13:56:09.178736718Z" level=info msg="StopPodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:56:09.178918 containerd[1796]: time="2025-01-30T13:56:09.178892166Z" level=info msg="RemovePodSandbox for \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:56:09.178958 containerd[1796]: time="2025-01-30T13:56:09.178920167Z" level=info msg="Forcibly stopping sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\"" Jan 30 13:56:09.178984 containerd[1796]: time="2025-01-30T13:56:09.178969015Z" level=info msg="TearDown network for sandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" successfully" Jan 30 13:56:09.180275 containerd[1796]: time="2025-01-30T13:56:09.180240042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.180275 containerd[1796]: time="2025-01-30T13:56:09.180258271Z" level=info msg="RemovePodSandbox \"58a727b9ec7f77e288e252360e5437b024436032d9b7368de2cf599c70a4bcb4\" returns successfully" Jan 30 13:56:09.180394 containerd[1796]: time="2025-01-30T13:56:09.180383448Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:56:09.180495 containerd[1796]: time="2025-01-30T13:56:09.180429060Z" level=info msg="TearDown network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" successfully" Jan 30 13:56:09.180495 containerd[1796]: time="2025-01-30T13:56:09.180454690Z" level=info msg="StopPodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" returns successfully" Jan 30 13:56:09.180662 containerd[1796]: time="2025-01-30T13:56:09.180636611Z" level=info msg="RemovePodSandbox for \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:56:09.180699 containerd[1796]: time="2025-01-30T13:56:09.180663692Z" level=info msg="Forcibly stopping sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\"" Jan 30 13:56:09.180761 containerd[1796]: time="2025-01-30T13:56:09.180706757Z" level=info msg="TearDown network for sandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" successfully" Jan 30 13:56:09.181830 containerd[1796]: time="2025-01-30T13:56:09.181817428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.181884 containerd[1796]: time="2025-01-30T13:56:09.181839626Z" level=info msg="RemovePodSandbox \"efdec4b19c006aba773fb7e6f6c85775a59d989eb15c6331efaa4c0457c8841f\" returns successfully" Jan 30 13:56:09.182004 containerd[1796]: time="2025-01-30T13:56:09.181992261Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" Jan 30 13:56:09.182058 containerd[1796]: time="2025-01-30T13:56:09.182035277Z" level=info msg="TearDown network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" successfully" Jan 30 13:56:09.182079 containerd[1796]: time="2025-01-30T13:56:09.182057880Z" level=info msg="StopPodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" returns successfully" Jan 30 13:56:09.182192 containerd[1796]: time="2025-01-30T13:56:09.182183406Z" level=info msg="RemovePodSandbox for \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" Jan 30 13:56:09.182213 containerd[1796]: time="2025-01-30T13:56:09.182194319Z" level=info msg="Forcibly stopping sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\"" Jan 30 13:56:09.182256 containerd[1796]: time="2025-01-30T13:56:09.182241816Z" level=info msg="TearDown network for sandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" successfully" Jan 30 13:56:09.183360 containerd[1796]: time="2025-01-30T13:56:09.183349170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.183395 containerd[1796]: time="2025-01-30T13:56:09.183365722Z" level=info msg="RemovePodSandbox \"2d1159c3d9c23c9d038125acbb78d8fecf1aadbe8836cd8c7bf9d391b3fa275e\" returns successfully" Jan 30 13:56:09.183563 containerd[1796]: time="2025-01-30T13:56:09.183537547Z" level=info msg="StopPodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\"" Jan 30 13:56:09.183636 containerd[1796]: time="2025-01-30T13:56:09.183627480Z" level=info msg="TearDown network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" successfully" Jan 30 13:56:09.183688 containerd[1796]: time="2025-01-30T13:56:09.183635175Z" level=info msg="StopPodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" returns successfully" Jan 30 13:56:09.183791 containerd[1796]: time="2025-01-30T13:56:09.183780214Z" level=info msg="RemovePodSandbox for \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\"" Jan 30 13:56:09.183825 containerd[1796]: time="2025-01-30T13:56:09.183792621Z" level=info msg="Forcibly stopping sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\"" Jan 30 13:56:09.183871 containerd[1796]: time="2025-01-30T13:56:09.183835820Z" level=info msg="TearDown network for sandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" successfully" Jan 30 13:56:09.185000 containerd[1796]: time="2025-01-30T13:56:09.184988635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.185057 containerd[1796]: time="2025-01-30T13:56:09.185008579Z" level=info msg="RemovePodSandbox \"23ec8fa454a4c33b71d6b3403cf0cb3695fcae463934c560c8763965ff81ec1b\" returns successfully" Jan 30 13:56:09.185165 containerd[1796]: time="2025-01-30T13:56:09.185155479Z" level=info msg="StopPodSandbox for \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\"" Jan 30 13:56:09.185236 containerd[1796]: time="2025-01-30T13:56:09.185228988Z" level=info msg="TearDown network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\" successfully" Jan 30 13:56:09.185258 containerd[1796]: time="2025-01-30T13:56:09.185235599Z" level=info msg="StopPodSandbox for \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\" returns successfully" Jan 30 13:56:09.185370 containerd[1796]: time="2025-01-30T13:56:09.185361737Z" level=info msg="RemovePodSandbox for \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\"" Jan 30 13:56:09.185392 containerd[1796]: time="2025-01-30T13:56:09.185373297Z" level=info msg="Forcibly stopping sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\"" Jan 30 13:56:09.185421 containerd[1796]: time="2025-01-30T13:56:09.185405552Z" level=info msg="TearDown network for sandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\" successfully" Jan 30 13:56:09.186655 containerd[1796]: time="2025-01-30T13:56:09.186641925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.186716 containerd[1796]: time="2025-01-30T13:56:09.186659796Z" level=info msg="RemovePodSandbox \"e608f5701b459d4184bd6281780f6e97e79fd550412028f9e5fcaf8e47e7d3c9\" returns successfully" Jan 30 13:56:09.186886 containerd[1796]: time="2025-01-30T13:56:09.186837403Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:56:09.186940 containerd[1796]: time="2025-01-30T13:56:09.186911615Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:56:09.186940 containerd[1796]: time="2025-01-30T13:56:09.186918052Z" level=info msg="StopPodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:56:09.187120 containerd[1796]: time="2025-01-30T13:56:09.187109315Z" level=info msg="RemovePodSandbox for \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:56:09.187164 containerd[1796]: time="2025-01-30T13:56:09.187121098Z" level=info msg="Forcibly stopping sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\"" Jan 30 13:56:09.187184 containerd[1796]: time="2025-01-30T13:56:09.187168584Z" level=info msg="TearDown network for sandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" successfully" Jan 30 13:56:09.188271 containerd[1796]: time="2025-01-30T13:56:09.188260546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.188296 containerd[1796]: time="2025-01-30T13:56:09.188278090Z" level=info msg="RemovePodSandbox \"3cc6c39c65610ed04cb3768fd8a9be3c8096872510acd8baf2053d88df9e7259\" returns successfully" Jan 30 13:56:09.188453 containerd[1796]: time="2025-01-30T13:56:09.188443769Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:56:09.188518 containerd[1796]: time="2025-01-30T13:56:09.188495691Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:56:09.188546 containerd[1796]: time="2025-01-30T13:56:09.188519063Z" level=info msg="StopPodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 13:56:09.188721 containerd[1796]: time="2025-01-30T13:56:09.188674151Z" level=info msg="RemovePodSandbox for \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:56:09.188721 containerd[1796]: time="2025-01-30T13:56:09.188698409Z" level=info msg="Forcibly stopping sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\"" Jan 30 13:56:09.188801 containerd[1796]: time="2025-01-30T13:56:09.188741965Z" level=info msg="TearDown network for sandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" successfully" Jan 30 13:56:09.189844 containerd[1796]: time="2025-01-30T13:56:09.189834344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.189873 containerd[1796]: time="2025-01-30T13:56:09.189851608Z" level=info msg="RemovePodSandbox \"36b3d090a5d86cd213352ca64c65225f17133136b139a92fdff8cfde29eeb626\" returns successfully" Jan 30 13:56:09.190096 containerd[1796]: time="2025-01-30T13:56:09.190086322Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:56:09.190168 containerd[1796]: time="2025-01-30T13:56:09.190161020Z" level=info msg="TearDown network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" successfully" Jan 30 13:56:09.190168 containerd[1796]: time="2025-01-30T13:56:09.190167603Z" level=info msg="StopPodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" returns successfully" Jan 30 13:56:09.190366 containerd[1796]: time="2025-01-30T13:56:09.190357557Z" level=info msg="RemovePodSandbox for \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:56:09.190393 containerd[1796]: time="2025-01-30T13:56:09.190367787Z" level=info msg="Forcibly stopping sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\"" Jan 30 13:56:09.190412 containerd[1796]: time="2025-01-30T13:56:09.190395361Z" level=info msg="TearDown network for sandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" successfully" Jan 30 13:56:09.191496 containerd[1796]: time="2025-01-30T13:56:09.191470309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.191544 containerd[1796]: time="2025-01-30T13:56:09.191519168Z" level=info msg="RemovePodSandbox \"9496a6f1b847a48496a285cc08d9420aab088a09fd726294bb7f4225252154c3\" returns successfully" Jan 30 13:56:09.191721 containerd[1796]: time="2025-01-30T13:56:09.191712147Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" Jan 30 13:56:09.191782 containerd[1796]: time="2025-01-30T13:56:09.191773832Z" level=info msg="TearDown network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" successfully" Jan 30 13:56:09.191782 containerd[1796]: time="2025-01-30T13:56:09.191780566Z" level=info msg="StopPodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" returns successfully" Jan 30 13:56:09.191935 containerd[1796]: time="2025-01-30T13:56:09.191927228Z" level=info msg="RemovePodSandbox for \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" Jan 30 13:56:09.192019 containerd[1796]: time="2025-01-30T13:56:09.191961079Z" level=info msg="Forcibly stopping sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\"" Jan 30 13:56:09.192047 containerd[1796]: time="2025-01-30T13:56:09.192040931Z" level=info msg="TearDown network for sandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" successfully" Jan 30 13:56:09.193211 containerd[1796]: time="2025-01-30T13:56:09.193184916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.193239 containerd[1796]: time="2025-01-30T13:56:09.193220730Z" level=info msg="RemovePodSandbox \"62d626ca645780d60904163cf20739a0d8bc4f76543bf855a22bb60bada989a7\" returns successfully" Jan 30 13:56:09.193369 containerd[1796]: time="2025-01-30T13:56:09.193358776Z" level=info msg="StopPodSandbox for \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\"" Jan 30 13:56:09.193403 containerd[1796]: time="2025-01-30T13:56:09.193395888Z" level=info msg="TearDown network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" successfully" Jan 30 13:56:09.193403 containerd[1796]: time="2025-01-30T13:56:09.193401926Z" level=info msg="StopPodSandbox for \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" returns successfully" Jan 30 13:56:09.193540 containerd[1796]: time="2025-01-30T13:56:09.193528645Z" level=info msg="RemovePodSandbox for \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\"" Jan 30 13:56:09.193577 containerd[1796]: time="2025-01-30T13:56:09.193542504Z" level=info msg="Forcibly stopping sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\"" Jan 30 13:56:09.193615 containerd[1796]: time="2025-01-30T13:56:09.193589249Z" level=info msg="TearDown network for sandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" successfully" Jan 30 13:56:09.194926 containerd[1796]: time="2025-01-30T13:56:09.194914148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.194966 containerd[1796]: time="2025-01-30T13:56:09.194935666Z" level=info msg="RemovePodSandbox \"7833859445a12987cddd0fa503ce93de535f65550fb2b3ac59b0b1f7f5d2fe61\" returns successfully" Jan 30 13:56:09.195144 containerd[1796]: time="2025-01-30T13:56:09.195131526Z" level=info msg="StopPodSandbox for \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\"" Jan 30 13:56:09.195199 containerd[1796]: time="2025-01-30T13:56:09.195189076Z" level=info msg="TearDown network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\" successfully" Jan 30 13:56:09.195199 containerd[1796]: time="2025-01-30T13:56:09.195195529Z" level=info msg="StopPodSandbox for \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\" returns successfully" Jan 30 13:56:09.195295 containerd[1796]: time="2025-01-30T13:56:09.195284371Z" level=info msg="RemovePodSandbox for \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\"" Jan 30 13:56:09.195321 containerd[1796]: time="2025-01-30T13:56:09.195295158Z" level=info msg="Forcibly stopping sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\"" Jan 30 13:56:09.195340 containerd[1796]: time="2025-01-30T13:56:09.195324869Z" level=info msg="TearDown network for sandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\" successfully" Jan 30 13:56:09.196473 containerd[1796]: time="2025-01-30T13:56:09.196435589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.196515 containerd[1796]: time="2025-01-30T13:56:09.196475012Z" level=info msg="RemovePodSandbox \"2065339bad884ce5c03fc80838fa2307c309bf51ccd7dff9fb4039f43c2f7314\" returns successfully" Jan 30 13:56:09.196718 containerd[1796]: time="2025-01-30T13:56:09.196689494Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:56:09.196796 containerd[1796]: time="2025-01-30T13:56:09.196784524Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:56:09.196864 containerd[1796]: time="2025-01-30T13:56:09.196795406Z" level=info msg="StopPodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:56:09.196990 containerd[1796]: time="2025-01-30T13:56:09.196981095Z" level=info msg="RemovePodSandbox for \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:56:09.197050 containerd[1796]: time="2025-01-30T13:56:09.196990435Z" level=info msg="Forcibly stopping sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\"" Jan 30 13:56:09.197096 containerd[1796]: time="2025-01-30T13:56:09.197052272Z" level=info msg="TearDown network for sandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" successfully" Jan 30 13:56:09.198890 containerd[1796]: time="2025-01-30T13:56:09.198845609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.198890 containerd[1796]: time="2025-01-30T13:56:09.198886325Z" level=info msg="RemovePodSandbox \"e8bee97a41f1a4e80efad0c63e2afbc025bcb408c28b9cac77f074834e18699d\" returns successfully" Jan 30 13:56:09.199132 containerd[1796]: time="2025-01-30T13:56:09.199078983Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:56:09.199226 containerd[1796]: time="2025-01-30T13:56:09.199169218Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:56:09.199226 containerd[1796]: time="2025-01-30T13:56:09.199197249Z" level=info msg="StopPodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 13:56:09.199308 containerd[1796]: time="2025-01-30T13:56:09.199297100Z" level=info msg="RemovePodSandbox for \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:56:09.199332 containerd[1796]: time="2025-01-30T13:56:09.199313570Z" level=info msg="Forcibly stopping sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\"" Jan 30 13:56:09.199391 containerd[1796]: time="2025-01-30T13:56:09.199363116Z" level=info msg="TearDown network for sandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" successfully" Jan 30 13:56:09.200735 containerd[1796]: time="2025-01-30T13:56:09.200671585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.200735 containerd[1796]: time="2025-01-30T13:56:09.200703326Z" level=info msg="RemovePodSandbox \"803927746a6fec5b2283423cfee60b93c0580637ee677cd17148636d72785d22\" returns successfully" Jan 30 13:56:09.200980 containerd[1796]: time="2025-01-30T13:56:09.200919075Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:56:09.201023 containerd[1796]: time="2025-01-30T13:56:09.200997168Z" level=info msg="TearDown network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" successfully" Jan 30 13:56:09.201048 containerd[1796]: time="2025-01-30T13:56:09.201024673Z" level=info msg="StopPodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" returns successfully" Jan 30 13:56:09.201170 containerd[1796]: time="2025-01-30T13:56:09.201132138Z" level=info msg="RemovePodSandbox for \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:56:09.201170 containerd[1796]: time="2025-01-30T13:56:09.201143766Z" level=info msg="Forcibly stopping sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\"" Jan 30 13:56:09.201220 containerd[1796]: time="2025-01-30T13:56:09.201178782Z" level=info msg="TearDown network for sandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" successfully" Jan 30 13:56:09.202401 containerd[1796]: time="2025-01-30T13:56:09.202361022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.202401 containerd[1796]: time="2025-01-30T13:56:09.202378627Z" level=info msg="RemovePodSandbox \"03d9f3244f943aab4b1d4d8885552eb1eb0c7b45f127f00430ee07f770125da9\" returns successfully" Jan 30 13:56:09.202535 containerd[1796]: time="2025-01-30T13:56:09.202476374Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" Jan 30 13:56:09.202566 containerd[1796]: time="2025-01-30T13:56:09.202555409Z" level=info msg="TearDown network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" successfully" Jan 30 13:56:09.202566 containerd[1796]: time="2025-01-30T13:56:09.202562746Z" level=info msg="StopPodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" returns successfully" Jan 30 13:56:09.202732 containerd[1796]: time="2025-01-30T13:56:09.202690928Z" level=info msg="RemovePodSandbox for \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" Jan 30 13:56:09.202732 containerd[1796]: time="2025-01-30T13:56:09.202704298Z" level=info msg="Forcibly stopping sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\"" Jan 30 13:56:09.202782 containerd[1796]: time="2025-01-30T13:56:09.202751112Z" level=info msg="TearDown network for sandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" successfully" Jan 30 13:56:09.203901 containerd[1796]: time="2025-01-30T13:56:09.203859327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.203901 containerd[1796]: time="2025-01-30T13:56:09.203876532Z" level=info msg="RemovePodSandbox \"2a714fd176d6c9029837244931d255d4606d7b42a6401fe98c01f05ecdc278ef\" returns successfully" Jan 30 13:56:09.204118 containerd[1796]: time="2025-01-30T13:56:09.204042960Z" level=info msg="StopPodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\"" Jan 30 13:56:09.204182 containerd[1796]: time="2025-01-30T13:56:09.204131066Z" level=info msg="TearDown network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" successfully" Jan 30 13:56:09.204220 containerd[1796]: time="2025-01-30T13:56:09.204181899Z" level=info msg="StopPodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" returns successfully" Jan 30 13:56:09.204333 containerd[1796]: time="2025-01-30T13:56:09.204322186Z" level=info msg="RemovePodSandbox for \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\"" Jan 30 13:56:09.204359 containerd[1796]: time="2025-01-30T13:56:09.204335754Z" level=info msg="Forcibly stopping sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\"" Jan 30 13:56:09.204382 containerd[1796]: time="2025-01-30T13:56:09.204366592Z" level=info msg="TearDown network for sandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" successfully" Jan 30 13:56:09.205590 containerd[1796]: time="2025-01-30T13:56:09.205500436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.205590 containerd[1796]: time="2025-01-30T13:56:09.205544604Z" level=info msg="RemovePodSandbox \"0ab73d839e731dc509419ff0a1eda39d584d24bf339a2065a69adee096ccc444\" returns successfully" Jan 30 13:56:09.205804 containerd[1796]: time="2025-01-30T13:56:09.205758465Z" level=info msg="StopPodSandbox for \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\"" Jan 30 13:56:09.205853 containerd[1796]: time="2025-01-30T13:56:09.205840230Z" level=info msg="TearDown network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\" successfully" Jan 30 13:56:09.205853 containerd[1796]: time="2025-01-30T13:56:09.205846476Z" level=info msg="StopPodSandbox for \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\" returns successfully" Jan 30 13:56:09.206045 containerd[1796]: time="2025-01-30T13:56:09.206011874Z" level=info msg="RemovePodSandbox for \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\"" Jan 30 13:56:09.206045 containerd[1796]: time="2025-01-30T13:56:09.206039112Z" level=info msg="Forcibly stopping sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\"" Jan 30 13:56:09.206096 containerd[1796]: time="2025-01-30T13:56:09.206080973Z" level=info msg="TearDown network for sandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\" successfully" Jan 30 13:56:09.207292 containerd[1796]: time="2025-01-30T13:56:09.207239107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.207292 containerd[1796]: time="2025-01-30T13:56:09.207269083Z" level=info msg="RemovePodSandbox \"221bbee3aa918f549450963346ef43f798cf60dd4358a88fb22dfbf948260d62\" returns successfully" Jan 30 13:56:09.207419 containerd[1796]: time="2025-01-30T13:56:09.207409059Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:56:09.207537 containerd[1796]: time="2025-01-30T13:56:09.207490872Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:56:09.207537 containerd[1796]: time="2025-01-30T13:56:09.207497787Z" level=info msg="StopPodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:56:09.207723 containerd[1796]: time="2025-01-30T13:56:09.207689562Z" level=info msg="RemovePodSandbox for \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:56:09.207723 containerd[1796]: time="2025-01-30T13:56:09.207719663Z" level=info msg="Forcibly stopping sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\"" Jan 30 13:56:09.207775 containerd[1796]: time="2025-01-30T13:56:09.207752104Z" level=info msg="TearDown network for sandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" successfully" Jan 30 13:56:09.208892 containerd[1796]: time="2025-01-30T13:56:09.208837217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.208892 containerd[1796]: time="2025-01-30T13:56:09.208870183Z" level=info msg="RemovePodSandbox \"8b7f80275f8116dd70bcab24125c5c554000b37ef740fdd32086d2e739bf2bd6\" returns successfully" Jan 30 13:56:09.209100 containerd[1796]: time="2025-01-30T13:56:09.209066212Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:56:09.209143 containerd[1796]: time="2025-01-30T13:56:09.209130221Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:56:09.209143 containerd[1796]: time="2025-01-30T13:56:09.209137944Z" level=info msg="StopPodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:56:09.209352 containerd[1796]: time="2025-01-30T13:56:09.209319468Z" level=info msg="RemovePodSandbox for \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:56:09.209352 containerd[1796]: time="2025-01-30T13:56:09.209347736Z" level=info msg="Forcibly stopping sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\"" Jan 30 13:56:09.209425 containerd[1796]: time="2025-01-30T13:56:09.209392207Z" level=info msg="TearDown network for sandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" successfully" Jan 30 13:56:09.210589 containerd[1796]: time="2025-01-30T13:56:09.210550504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.210589 containerd[1796]: time="2025-01-30T13:56:09.210566242Z" level=info msg="RemovePodSandbox \"1edc528e45b4ffb6d7f39512c7ecdeb318acdb5489ee8573e53b67b5f9fd920a\" returns successfully" Jan 30 13:56:09.210843 containerd[1796]: time="2025-01-30T13:56:09.210792927Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:56:09.210843 containerd[1796]: time="2025-01-30T13:56:09.210836070Z" level=info msg="TearDown network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" successfully" Jan 30 13:56:09.210843 containerd[1796]: time="2025-01-30T13:56:09.210842328Z" level=info msg="StopPodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" returns successfully" Jan 30 13:56:09.211045 containerd[1796]: time="2025-01-30T13:56:09.211003956Z" level=info msg="RemovePodSandbox for \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:56:09.211045 containerd[1796]: time="2025-01-30T13:56:09.211013113Z" level=info msg="Forcibly stopping sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\"" Jan 30 13:56:09.211108 containerd[1796]: time="2025-01-30T13:56:09.211057780Z" level=info msg="TearDown network for sandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" successfully" Jan 30 13:56:09.212221 containerd[1796]: time="2025-01-30T13:56:09.212178178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.212221 containerd[1796]: time="2025-01-30T13:56:09.212194629Z" level=info msg="RemovePodSandbox \"7a9fe755e5cdee7ad8b63d6de819102c57cd442f0c67177a07829f332dd9a0a2\" returns successfully" Jan 30 13:56:09.212345 containerd[1796]: time="2025-01-30T13:56:09.212335917Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" Jan 30 13:56:09.212381 containerd[1796]: time="2025-01-30T13:56:09.212374798Z" level=info msg="TearDown network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" successfully" Jan 30 13:56:09.212402 containerd[1796]: time="2025-01-30T13:56:09.212381217Z" level=info msg="StopPodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" returns successfully" Jan 30 13:56:09.212574 containerd[1796]: time="2025-01-30T13:56:09.212533866Z" level=info msg="RemovePodSandbox for \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" Jan 30 13:56:09.212574 containerd[1796]: time="2025-01-30T13:56:09.212546076Z" level=info msg="Forcibly stopping sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\"" Jan 30 13:56:09.212619 containerd[1796]: time="2025-01-30T13:56:09.212573621Z" level=info msg="TearDown network for sandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" successfully" Jan 30 13:56:09.213658 containerd[1796]: time="2025-01-30T13:56:09.213619359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.213658 containerd[1796]: time="2025-01-30T13:56:09.213635504Z" level=info msg="RemovePodSandbox \"25f97740f0d52f17c8f787db1d415465bae3a80a530c50f113ab5e5a7582dbf1\" returns successfully" Jan 30 13:56:09.213850 containerd[1796]: time="2025-01-30T13:56:09.213803852Z" level=info msg="StopPodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\"" Jan 30 13:56:09.213913 containerd[1796]: time="2025-01-30T13:56:09.213886860Z" level=info msg="TearDown network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" successfully" Jan 30 13:56:09.213913 containerd[1796]: time="2025-01-30T13:56:09.213893183Z" level=info msg="StopPodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" returns successfully" Jan 30 13:56:09.214111 containerd[1796]: time="2025-01-30T13:56:09.214050698Z" level=info msg="RemovePodSandbox for \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\"" Jan 30 13:56:09.214111 containerd[1796]: time="2025-01-30T13:56:09.214062894Z" level=info msg="Forcibly stopping sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\"" Jan 30 13:56:09.214162 containerd[1796]: time="2025-01-30T13:56:09.214111864Z" level=info msg="TearDown network for sandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" successfully" Jan 30 13:56:09.215258 containerd[1796]: time="2025-01-30T13:56:09.215219070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.215258 containerd[1796]: time="2025-01-30T13:56:09.215235780Z" level=info msg="RemovePodSandbox \"cff2888944473b06def649b1b22bd9a7026a96579f5ec792b22002a3ce8a0ea8\" returns successfully" Jan 30 13:56:09.215343 containerd[1796]: time="2025-01-30T13:56:09.215332805Z" level=info msg="StopPodSandbox for \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\"" Jan 30 13:56:09.215380 containerd[1796]: time="2025-01-30T13:56:09.215373686Z" level=info msg="TearDown network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\" successfully" Jan 30 13:56:09.215399 containerd[1796]: time="2025-01-30T13:56:09.215380842Z" level=info msg="StopPodSandbox for \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\" returns successfully" Jan 30 13:56:09.215505 containerd[1796]: time="2025-01-30T13:56:09.215474057Z" level=info msg="RemovePodSandbox for \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\"" Jan 30 13:56:09.215505 containerd[1796]: time="2025-01-30T13:56:09.215483925Z" level=info msg="Forcibly stopping sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\"" Jan 30 13:56:09.215557 containerd[1796]: time="2025-01-30T13:56:09.215512697Z" level=info msg="TearDown network for sandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\" successfully" Jan 30 13:56:09.216569 containerd[1796]: time="2025-01-30T13:56:09.216530356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.216569 containerd[1796]: time="2025-01-30T13:56:09.216547837Z" level=info msg="RemovePodSandbox \"7a2f5f4adb3a85b74907155c19ed9e2b339d3d758bef2044eca7a788dca03c93\" returns successfully" Jan 30 13:56:09.216729 containerd[1796]: time="2025-01-30T13:56:09.216690619Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:56:09.216764 containerd[1796]: time="2025-01-30T13:56:09.216729186Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:56:09.216764 containerd[1796]: time="2025-01-30T13:56:09.216735305Z" level=info msg="StopPodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:56:09.216868 containerd[1796]: time="2025-01-30T13:56:09.216832370Z" level=info msg="RemovePodSandbox for \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:56:09.216868 containerd[1796]: time="2025-01-30T13:56:09.216841730Z" level=info msg="Forcibly stopping sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\"" Jan 30 13:56:09.216918 containerd[1796]: time="2025-01-30T13:56:09.216879092Z" level=info msg="TearDown network for sandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" successfully" Jan 30 13:56:09.218061 containerd[1796]: time="2025-01-30T13:56:09.218051020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.218082 containerd[1796]: time="2025-01-30T13:56:09.218069257Z" level=info msg="RemovePodSandbox \"a0227799800e6b223a42c7d65739ce3aad37b995ac22b636fd370ab113d5a3e5\" returns successfully" Jan 30 13:56:09.218244 containerd[1796]: time="2025-01-30T13:56:09.218206128Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:56:09.218279 containerd[1796]: time="2025-01-30T13:56:09.218250203Z" level=info msg="TearDown network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:56:09.218279 containerd[1796]: time="2025-01-30T13:56:09.218257166Z" level=info msg="StopPodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:56:09.218363 containerd[1796]: time="2025-01-30T13:56:09.218354426Z" level=info msg="RemovePodSandbox for \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:56:09.218384 containerd[1796]: time="2025-01-30T13:56:09.218363907Z" level=info msg="Forcibly stopping sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\"" Jan 30 13:56:09.218413 containerd[1796]: time="2025-01-30T13:56:09.218397455Z" level=info msg="TearDown network for sandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" successfully" Jan 30 13:56:09.219532 containerd[1796]: time="2025-01-30T13:56:09.219490997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.219532 containerd[1796]: time="2025-01-30T13:56:09.219509673Z" level=info msg="RemovePodSandbox \"79c457487836ce45d50ea408e1165f53036776ea653a13e19b112b95b807ad65\" returns successfully" Jan 30 13:56:09.219680 containerd[1796]: time="2025-01-30T13:56:09.219641663Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:56:09.219714 containerd[1796]: time="2025-01-30T13:56:09.219682480Z" level=info msg="TearDown network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" successfully" Jan 30 13:56:09.219714 containerd[1796]: time="2025-01-30T13:56:09.219688689Z" level=info msg="StopPodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" returns successfully" Jan 30 13:56:09.219846 containerd[1796]: time="2025-01-30T13:56:09.219804895Z" level=info msg="RemovePodSandbox for \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:56:09.219846 containerd[1796]: time="2025-01-30T13:56:09.219816535Z" level=info msg="Forcibly stopping sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\"" Jan 30 13:56:09.219899 containerd[1796]: time="2025-01-30T13:56:09.219848863Z" level=info msg="TearDown network for sandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" successfully" Jan 30 13:56:09.220972 containerd[1796]: time="2025-01-30T13:56:09.220933037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.220972 containerd[1796]: time="2025-01-30T13:56:09.220951093Z" level=info msg="RemovePodSandbox \"5cd4c4184c0c714d0c307f001739526717e5fcb9bfa88275208184b99d8d085f\" returns successfully" Jan 30 13:56:09.221114 containerd[1796]: time="2025-01-30T13:56:09.221075784Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" Jan 30 13:56:09.221144 containerd[1796]: time="2025-01-30T13:56:09.221113445Z" level=info msg="TearDown network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" successfully" Jan 30 13:56:09.221144 containerd[1796]: time="2025-01-30T13:56:09.221121630Z" level=info msg="StopPodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" returns successfully" Jan 30 13:56:09.221266 containerd[1796]: time="2025-01-30T13:56:09.221225118Z" level=info msg="RemovePodSandbox for \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" Jan 30 13:56:09.221266 containerd[1796]: time="2025-01-30T13:56:09.221235407Z" level=info msg="Forcibly stopping sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\"" Jan 30 13:56:09.221312 containerd[1796]: time="2025-01-30T13:56:09.221268061Z" level=info msg="TearDown network for sandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" successfully" Jan 30 13:56:09.222451 containerd[1796]: time="2025-01-30T13:56:09.222404288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.222451 containerd[1796]: time="2025-01-30T13:56:09.222422024Z" level=info msg="RemovePodSandbox \"f225db03a2f79a524706ed110c685935ada20f7107bcd834cf378d0921c5914e\" returns successfully" Jan 30 13:56:09.222575 containerd[1796]: time="2025-01-30T13:56:09.222536923Z" level=info msg="StopPodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\"" Jan 30 13:56:09.222605 containerd[1796]: time="2025-01-30T13:56:09.222583872Z" level=info msg="TearDown network for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" successfully" Jan 30 13:56:09.222605 containerd[1796]: time="2025-01-30T13:56:09.222590869Z" level=info msg="StopPodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" returns successfully" Jan 30 13:56:09.222747 containerd[1796]: time="2025-01-30T13:56:09.222707656Z" level=info msg="RemovePodSandbox for \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\"" Jan 30 13:56:09.222747 containerd[1796]: time="2025-01-30T13:56:09.222720807Z" level=info msg="Forcibly stopping sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\"" Jan 30 13:56:09.222797 containerd[1796]: time="2025-01-30T13:56:09.222748242Z" level=info msg="TearDown network for sandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" successfully" Jan 30 13:56:09.223844 containerd[1796]: time="2025-01-30T13:56:09.223804811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.223844 containerd[1796]: time="2025-01-30T13:56:09.223822705Z" level=info msg="RemovePodSandbox \"ce7f277a12c1df06123ba233ed3dcbb374443313886eda9a18f609f1ab254145\" returns successfully" Jan 30 13:56:09.223971 containerd[1796]: time="2025-01-30T13:56:09.223921807Z" level=info msg="StopPodSandbox for \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\"" Jan 30 13:56:09.223971 containerd[1796]: time="2025-01-30T13:56:09.223963611Z" level=info msg="TearDown network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\" successfully" Jan 30 13:56:09.224022 containerd[1796]: time="2025-01-30T13:56:09.223972876Z" level=info msg="StopPodSandbox for \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\" returns successfully" Jan 30 13:56:09.224149 containerd[1796]: time="2025-01-30T13:56:09.224111036Z" level=info msg="RemovePodSandbox for \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\"" Jan 30 13:56:09.224149 containerd[1796]: time="2025-01-30T13:56:09.224122967Z" level=info msg="Forcibly stopping sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\"" Jan 30 13:56:09.224204 containerd[1796]: time="2025-01-30T13:56:09.224152137Z" level=info msg="TearDown network for sandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\" successfully" Jan 30 13:56:09.225246 containerd[1796]: time="2025-01-30T13:56:09.225207048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:56:09.225246 containerd[1796]: time="2025-01-30T13:56:09.225223952Z" level=info msg="RemovePodSandbox \"a2f49089e10d2bcb77aafaa7a3878097888046fec410ade461e0ca1cbf1b36f8\" returns successfully" Jan 30 13:56:10.073230 systemd[1]: Started sshd@9-147.75.90.195:22-185.112.151.139:56900.service - OpenSSH per-connection server daemon (185.112.151.139:56900). Jan 30 13:56:10.190990 systemd[1]: Started sshd@10-147.75.90.195:22-36.112.132.249:42284.service - OpenSSH per-connection server daemon (36.112.132.249:42284). Jan 30 13:56:11.279927 sshd[7067]: Invalid user test1 from 185.112.151.139 port 56900 Jan 30 13:56:11.512615 sshd[7067]: Received disconnect from 185.112.151.139 port 56900:11: Bye Bye [preauth] Jan 30 13:56:11.512615 sshd[7067]: Disconnected from invalid user test1 185.112.151.139 port 56900 [preauth] Jan 30 13:56:11.516135 systemd[1]: sshd@9-147.75.90.195:22-185.112.151.139:56900.service: Deactivated successfully. Jan 30 13:56:13.669157 systemd[1]: Started sshd@11-147.75.90.195:22-20.220.16.23:41558.service - OpenSSH per-connection server daemon (20.220.16.23:41558). Jan 30 13:56:14.072128 sshd[7074]: Invalid user user1 from 20.220.16.23 port 41558 Jan 30 13:56:14.142807 sshd[7074]: Received disconnect from 20.220.16.23 port 41558:11: Bye Bye [preauth] Jan 30 13:56:14.142807 sshd[7074]: Disconnected from invalid user user1 20.220.16.23 port 41558 [preauth] Jan 30 13:56:14.146362 systemd[1]: sshd@11-147.75.90.195:22-20.220.16.23:41558.service: Deactivated successfully. Jan 30 13:56:41.895125 systemd[1]: Started sshd@12-147.75.90.195:22-183.224.129.218:39472.service - OpenSSH per-connection server daemon (183.224.129.218:39472). Jan 30 13:56:50.355766 systemd[1]: Started sshd@13-147.75.90.195:22-125.124.130.124:37244.service - OpenSSH per-connection server daemon (125.124.130.124:37244). Jan 30 13:57:35.052181 systemd[1]: Started sshd@14-147.75.90.195:22-185.213.165.81:32778.service - OpenSSH per-connection server daemon (185.213.165.81:32778). 
Jan 30 13:57:36.300985 sshd[7282]: Invalid user test1 from 185.213.165.81 port 32778 Jan 30 13:57:36.535453 sshd[7282]: Received disconnect from 185.213.165.81 port 32778:11: Bye Bye [preauth] Jan 30 13:57:36.535453 sshd[7282]: Disconnected from invalid user test1 185.213.165.81 port 32778 [preauth] Jan 30 13:57:36.538657 systemd[1]: sshd@14-147.75.90.195:22-185.213.165.81:32778.service: Deactivated successfully. Jan 30 13:58:01.534687 systemd[1]: Started sshd@15-147.75.90.195:22-91.205.219.185:42460.service - OpenSSH per-connection server daemon (91.205.219.185:42460). Jan 30 13:58:02.846369 sshd[7336]: Invalid user dev from 91.205.219.185 port 42460 Jan 30 13:58:03.095840 sshd[7336]: Received disconnect from 91.205.219.185 port 42460:11: Bye Bye [preauth] Jan 30 13:58:03.095840 sshd[7336]: Disconnected from invalid user dev 91.205.219.185 port 42460 [preauth] Jan 30 13:58:03.099014 systemd[1]: sshd@15-147.75.90.195:22-91.205.219.185:42460.service: Deactivated successfully. Jan 30 13:58:10.205958 systemd[1]: sshd@10-147.75.90.195:22-36.112.132.249:42284.service: Deactivated successfully. Jan 30 13:58:41.930762 systemd[1]: sshd@12-147.75.90.195:22-183.224.129.218:39472.service: Deactivated successfully. Jan 30 13:58:50.370621 systemd[1]: sshd@13-147.75.90.195:22-125.124.130.124:37244.service: Deactivated successfully. Jan 30 13:58:53.799165 systemd[1]: Started sshd@16-147.75.90.195:22-218.92.0.208:26226.service - OpenSSH per-connection server daemon (218.92.0.208:26226). Jan 30 13:58:53.974689 sshd[7474]: Unable to negotiate with 218.92.0.208 port 26226: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Jan 30 13:58:53.976980 systemd[1]: sshd@16-147.75.90.195:22-218.92.0.208:26226.service: Deactivated successfully. 
Jan 30 13:59:04.213215 systemd[1]: Started sshd@17-147.75.90.195:22-20.220.16.23:59948.service - OpenSSH per-connection server daemon (20.220.16.23:59948). Jan 30 13:59:04.634899 sshd[7499]: Invalid user test1 from 20.220.16.23 port 59948 Jan 30 13:59:04.704196 sshd[7499]: Received disconnect from 20.220.16.23 port 59948:11: Bye Bye [preauth] Jan 30 13:59:04.704196 sshd[7499]: Disconnected from invalid user test1 20.220.16.23 port 59948 [preauth] Jan 30 13:59:04.707443 systemd[1]: sshd@17-147.75.90.195:22-20.220.16.23:59948.service: Deactivated successfully. Jan 30 13:59:14.699287 systemd[1]: Started sshd@18-147.75.90.195:22-185.112.151.139:57322.service - OpenSSH per-connection server daemon (185.112.151.139:57322). Jan 30 13:59:15.897876 sshd[7533]: Invalid user git from 185.112.151.139 port 57322 Jan 30 13:59:16.130071 sshd[7533]: Received disconnect from 185.112.151.139 port 57322:11: Bye Bye [preauth] Jan 30 13:59:16.130071 sshd[7533]: Disconnected from invalid user git 185.112.151.139 port 57322 [preauth] Jan 30 13:59:16.133322 systemd[1]: sshd@18-147.75.90.195:22-185.112.151.139:57322.service: Deactivated successfully. Jan 30 13:59:39.357573 systemd[1]: Started sshd@19-147.75.90.195:22-185.213.165.81:49874.service - OpenSSH per-connection server daemon (185.213.165.81:49874). Jan 30 13:59:40.732442 sshd[7608]: Invalid user debian from 185.213.165.81 port 49874 Jan 30 13:59:40.995007 sshd[7608]: Received disconnect from 185.213.165.81 port 49874:11: Bye Bye [preauth] Jan 30 13:59:40.995007 sshd[7608]: Disconnected from invalid user debian 185.213.165.81 port 49874 [preauth] Jan 30 13:59:40.998219 systemd[1]: sshd@19-147.75.90.195:22-185.213.165.81:49874.service: Deactivated successfully. Jan 30 14:00:22.166791 systemd[1]: Started sshd@20-147.75.90.195:22-20.220.16.23:47876.service - OpenSSH per-connection server daemon (20.220.16.23:47876). 
Jan 30 14:00:22.584947 sshd[7691]: Invalid user server from 20.220.16.23 port 47876 Jan 30 14:00:22.651983 sshd[7691]: Received disconnect from 20.220.16.23 port 47876:11: Bye Bye [preauth] Jan 30 14:00:22.651983 sshd[7691]: Disconnected from invalid user server 20.220.16.23 port 47876 [preauth] Jan 30 14:00:22.655235 systemd[1]: sshd@20-147.75.90.195:22-20.220.16.23:47876.service: Deactivated successfully. Jan 30 14:00:34.857926 systemd[1]: Started sshd@21-147.75.90.195:22-185.112.151.139:50572.service - OpenSSH per-connection server daemon (185.112.151.139:50572). Jan 30 14:00:36.077295 sshd[7733]: Invalid user dev from 185.112.151.139 port 50572 Jan 30 14:00:36.304246 sshd[7733]: Received disconnect from 185.112.151.139 port 50572:11: Bye Bye [preauth] Jan 30 14:00:36.304246 sshd[7733]: Disconnected from invalid user dev 185.112.151.139 port 50572 [preauth] Jan 30 14:00:36.307507 systemd[1]: sshd@21-147.75.90.195:22-185.112.151.139:50572.service: Deactivated successfully. Jan 30 14:00:52.504062 systemd[1]: Started sshd@22-147.75.90.195:22-185.213.165.81:48514.service - OpenSSH per-connection server daemon (185.213.165.81:48514). Jan 30 14:00:53.835249 sshd[7766]: Invalid user git from 185.213.165.81 port 48514 Jan 30 14:00:54.085011 sshd[7766]: Received disconnect from 185.213.165.81 port 48514:11: Bye Bye [preauth] Jan 30 14:00:54.085011 sshd[7766]: Disconnected from invalid user git 185.213.165.81 port 48514 [preauth] Jan 30 14:00:54.089846 systemd[1]: sshd@22-147.75.90.195:22-185.213.165.81:48514.service: Deactivated successfully. Jan 30 14:01:07.629177 systemd[1]: Started sshd@23-147.75.90.195:22-139.178.89.65:46634.service - OpenSSH per-connection server daemon (139.178.89.65:46634). 
Jan 30 14:01:07.688904 sshd[7801]: Accepted publickey for core from 139.178.89.65 port 46634 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 14:01:07.689782 sshd-session[7801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:07.693383 systemd-logind[1786]: New session 12 of user core. Jan 30 14:01:07.708622 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:01:07.839431 sshd[7803]: Connection closed by 139.178.89.65 port 46634 Jan 30 14:01:07.839670 sshd-session[7801]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:07.841264 systemd[1]: sshd@23-147.75.90.195:22-139.178.89.65:46634.service: Deactivated successfully. Jan 30 14:01:07.842243 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:01:07.842946 systemd-logind[1786]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:01:07.843435 systemd-logind[1786]: Removed session 12. Jan 30 14:01:12.852512 systemd[1]: Started sshd@24-147.75.90.195:22-139.178.89.65:40766.service - OpenSSH per-connection server daemon (139.178.89.65:40766). Jan 30 14:01:12.886345 sshd[7857]: Accepted publickey for core from 139.178.89.65 port 40766 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 14:01:12.887101 sshd-session[7857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:12.890240 systemd-logind[1786]: New session 13 of user core. Jan 30 14:01:12.905678 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:01:12.994957 sshd[7859]: Connection closed by 139.178.89.65 port 40766 Jan 30 14:01:12.995134 sshd-session[7857]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:12.996623 systemd[1]: sshd@24-147.75.90.195:22-139.178.89.65:40766.service: Deactivated successfully. Jan 30 14:01:12.997593 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 30 14:01:12.998263 systemd-logind[1786]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:01:12.998982 systemd-logind[1786]: Removed session 13. Jan 30 14:01:18.024629 systemd[1]: Started sshd@25-147.75.90.195:22-139.178.89.65:40774.service - OpenSSH per-connection server daemon (139.178.89.65:40774). Jan 30 14:01:18.059165 sshd[7906]: Accepted publickey for core from 139.178.89.65 port 40774 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 14:01:18.062447 sshd-session[7906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:18.073763 systemd-logind[1786]: New session 14 of user core. Jan 30 14:01:18.093913 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:01:18.185475 sshd[7908]: Connection closed by 139.178.89.65 port 40774 Jan 30 14:01:18.185684 sshd-session[7906]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:18.215253 systemd[1]: sshd@25-147.75.90.195:22-139.178.89.65:40774.service: Deactivated successfully. Jan 30 14:01:18.219772 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:01:18.223593 systemd-logind[1786]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:01:18.243226 systemd[1]: Started sshd@26-147.75.90.195:22-139.178.89.65:40776.service - OpenSSH per-connection server daemon (139.178.89.65:40776). Jan 30 14:01:18.245841 systemd-logind[1786]: Removed session 14. Jan 30 14:01:18.305272 sshd[7933]: Accepted publickey for core from 139.178.89.65 port 40776 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 14:01:18.306085 sshd-session[7933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:18.309446 systemd-logind[1786]: New session 15 of user core. Jan 30 14:01:18.324665 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 30 14:01:18.427188 sshd[7935]: Connection closed by 139.178.89.65 port 40776
Jan 30 14:01:18.427406 sshd-session[7933]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:18.440276 systemd[1]: sshd@26-147.75.90.195:22-139.178.89.65:40776.service: Deactivated successfully.
Jan 30 14:01:18.441227 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 14:01:18.441877 systemd-logind[1786]: Session 15 logged out. Waiting for processes to exit.
Jan 30 14:01:18.442721 systemd[1]: Started sshd@27-147.75.90.195:22-139.178.89.65:40786.service - OpenSSH per-connection server daemon (139.178.89.65:40786).
Jan 30 14:01:18.443176 systemd-logind[1786]: Removed session 15.
Jan 30 14:01:18.477332 sshd[7957]: Accepted publickey for core from 139.178.89.65 port 40786 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:18.480681 sshd-session[7957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:18.492324 systemd-logind[1786]: New session 16 of user core.
Jan 30 14:01:18.516016 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 14:01:18.655470 sshd[7959]: Connection closed by 139.178.89.65 port 40786
Jan 30 14:01:18.655597 sshd-session[7957]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:18.657080 systemd[1]: sshd@27-147.75.90.195:22-139.178.89.65:40786.service: Deactivated successfully.
Jan 30 14:01:18.658013 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 14:01:18.658675 systemd-logind[1786]: Session 16 logged out. Waiting for processes to exit.
Jan 30 14:01:18.659225 systemd-logind[1786]: Removed session 16.
Jan 30 14:01:23.676853 systemd[1]: Started sshd@28-147.75.90.195:22-139.178.89.65:51012.service - OpenSSH per-connection server daemon (139.178.89.65:51012).
Jan 30 14:01:23.710366 sshd[7987]: Accepted publickey for core from 139.178.89.65 port 51012 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:23.711129 sshd-session[7987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:23.714339 systemd-logind[1786]: New session 17 of user core.
Jan 30 14:01:23.729661 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 14:01:23.822659 sshd[7989]: Connection closed by 139.178.89.65 port 51012
Jan 30 14:01:23.822838 sshd-session[7987]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:23.824469 systemd[1]: sshd@28-147.75.90.195:22-139.178.89.65:51012.service: Deactivated successfully.
Jan 30 14:01:23.825590 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 14:01:23.826433 systemd-logind[1786]: Session 17 logged out. Waiting for processes to exit.
Jan 30 14:01:23.827228 systemd-logind[1786]: Removed session 17.
Jan 30 14:01:28.848739 systemd[1]: Started sshd@29-147.75.90.195:22-139.178.89.65:51014.service - OpenSSH per-connection server daemon (139.178.89.65:51014).
Jan 30 14:01:28.880340 sshd[8033]: Accepted publickey for core from 139.178.89.65 port 51014 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:28.881094 sshd-session[8033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:28.884365 systemd-logind[1786]: New session 18 of user core.
Jan 30 14:01:28.896670 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:01:28.985169 sshd[8035]: Connection closed by 139.178.89.65 port 51014
Jan 30 14:01:28.985331 sshd-session[8033]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:28.999133 systemd[1]: sshd@29-147.75.90.195:22-139.178.89.65:51014.service: Deactivated successfully.
Jan 30 14:01:28.999984 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:01:29.000769 systemd-logind[1786]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:01:29.001687 systemd[1]: Started sshd@30-147.75.90.195:22-139.178.89.65:51030.service - OpenSSH per-connection server daemon (139.178.89.65:51030).
Jan 30 14:01:29.002172 systemd-logind[1786]: Removed session 18.
Jan 30 14:01:29.037369 sshd[8059]: Accepted publickey for core from 139.178.89.65 port 51030 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:29.040735 sshd-session[8059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:29.052494 systemd-logind[1786]: New session 19 of user core.
Jan 30 14:01:29.073003 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:01:29.179700 sshd[8061]: Connection closed by 139.178.89.65 port 51030
Jan 30 14:01:29.179841 sshd-session[8059]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:29.213398 systemd[1]: sshd@30-147.75.90.195:22-139.178.89.65:51030.service: Deactivated successfully.
Jan 30 14:01:29.217446 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:01:29.221110 systemd-logind[1786]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:01:29.241202 systemd[1]: Started sshd@31-147.75.90.195:22-139.178.89.65:51036.service - OpenSSH per-connection server daemon (139.178.89.65:51036).
Jan 30 14:01:29.243807 systemd-logind[1786]: Removed session 19.
Jan 30 14:01:29.302355 sshd[8082]: Accepted publickey for core from 139.178.89.65 port 51036 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:29.305623 sshd-session[8082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:29.317530 systemd-logind[1786]: New session 20 of user core.
Jan 30 14:01:29.338849 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:01:30.139738 sshd[8086]: Connection closed by 139.178.89.65 port 51036
Jan 30 14:01:30.140680 sshd-session[8082]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:30.163672 systemd[1]: sshd@31-147.75.90.195:22-139.178.89.65:51036.service: Deactivated successfully.
Jan 30 14:01:30.167178 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:01:30.169549 systemd-logind[1786]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:01:30.177845 systemd[1]: Started sshd@32-147.75.90.195:22-139.178.89.65:51044.service - OpenSSH per-connection server daemon (139.178.89.65:51044).
Jan 30 14:01:30.178905 systemd-logind[1786]: Removed session 20.
Jan 30 14:01:30.215483 sshd[8116]: Accepted publickey for core from 139.178.89.65 port 51044 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:30.216248 sshd-session[8116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:30.219169 systemd-logind[1786]: New session 21 of user core.
Jan 30 14:01:30.227674 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:01:30.430466 sshd[8121]: Connection closed by 139.178.89.65 port 51044
Jan 30 14:01:30.430632 sshd-session[8116]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:30.455987 systemd[1]: sshd@32-147.75.90.195:22-139.178.89.65:51044.service: Deactivated successfully.
Jan 30 14:01:30.460131 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:01:30.463676 systemd-logind[1786]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:01:30.486124 systemd[1]: Started sshd@33-147.75.90.195:22-139.178.89.65:51060.service - OpenSSH per-connection server daemon (139.178.89.65:51060).
Jan 30 14:01:30.488623 systemd-logind[1786]: Removed session 21.
Jan 30 14:01:30.548473 sshd[8143]: Accepted publickey for core from 139.178.89.65 port 51060 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:30.549415 sshd-session[8143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:30.552945 systemd-logind[1786]: New session 22 of user core.
Jan 30 14:01:30.573692 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:01:30.695101 sshd[8146]: Connection closed by 139.178.89.65 port 51060
Jan 30 14:01:30.695233 sshd-session[8143]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:30.696979 systemd[1]: sshd@33-147.75.90.195:22-139.178.89.65:51060.service: Deactivated successfully.
Jan 30 14:01:30.698093 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:01:30.698961 systemd-logind[1786]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:01:30.699758 systemd-logind[1786]: Removed session 22.
Jan 30 14:01:35.719416 systemd[1]: Started sshd@34-147.75.90.195:22-139.178.89.65:59672.service - OpenSSH per-connection server daemon (139.178.89.65:59672).
Jan 30 14:01:35.753624 sshd[8173]: Accepted publickey for core from 139.178.89.65 port 59672 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:35.757573 sshd-session[8173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:35.769341 systemd-logind[1786]: New session 23 of user core.
Jan 30 14:01:35.782871 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:01:35.882666 sshd[8175]: Connection closed by 139.178.89.65 port 59672
Jan 30 14:01:35.883040 sshd-session[8173]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:35.885241 systemd[1]: sshd@34-147.75.90.195:22-139.178.89.65:59672.service: Deactivated successfully.
Jan 30 14:01:35.886264 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:01:35.886817 systemd-logind[1786]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:01:35.887417 systemd-logind[1786]: Removed session 23.
Jan 30 14:01:40.921389 systemd[1]: Started sshd@35-147.75.90.195:22-139.178.89.65:59678.service - OpenSSH per-connection server daemon (139.178.89.65:59678).
Jan 30 14:01:40.978743 sshd[8228]: Accepted publickey for core from 139.178.89.65 port 59678 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:40.982185 sshd-session[8228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:40.993630 systemd-logind[1786]: New session 24 of user core.
Jan 30 14:01:41.009812 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:01:41.159055 sshd[8230]: Connection closed by 139.178.89.65 port 59678
Jan 30 14:01:41.159234 sshd-session[8228]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:41.160880 systemd[1]: sshd@35-147.75.90.195:22-139.178.89.65:59678.service: Deactivated successfully.
Jan 30 14:01:41.161846 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:01:41.162489 systemd-logind[1786]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:01:41.163177 systemd-logind[1786]: Removed session 24.
Jan 30 14:01:41.511848 systemd[1]: Started sshd@36-147.75.90.195:22-20.220.16.23:54330.service - OpenSSH per-connection server daemon (20.220.16.23:54330).
Jan 30 14:01:41.887407 sshd[8254]: Invalid user git from 20.220.16.23 port 54330
Jan 30 14:01:41.954233 sshd[8254]: Received disconnect from 20.220.16.23 port 54330:11: Bye Bye [preauth]
Jan 30 14:01:41.954233 sshd[8254]: Disconnected from invalid user git 20.220.16.23 port 54330 [preauth]
Jan 30 14:01:41.957554 systemd[1]: sshd@36-147.75.90.195:22-20.220.16.23:54330.service: Deactivated successfully.
Jan 30 14:01:46.166679 systemd[1]: Started sshd@37-147.75.90.195:22-139.178.89.65:41248.service - OpenSSH per-connection server daemon (139.178.89.65:41248).
Jan 30 14:01:46.203489 sshd[8259]: Accepted publickey for core from 139.178.89.65 port 41248 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 14:01:46.204478 sshd-session[8259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:46.208481 systemd-logind[1786]: New session 25 of user core.
Jan 30 14:01:46.221764 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 14:01:46.308591 sshd[8261]: Connection closed by 139.178.89.65 port 41248
Jan 30 14:01:46.308744 sshd-session[8259]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:46.310313 systemd[1]: sshd@37-147.75.90.195:22-139.178.89.65:41248.service: Deactivated successfully.
Jan 30 14:01:46.311258 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 14:01:46.312005 systemd-logind[1786]: Session 25 logged out. Waiting for processes to exit.
Jan 30 14:01:46.312647 systemd-logind[1786]: Removed session 25.