Jan 30 13:22:02.468604 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Jan 30 13:22:02.468618 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:22:02.468625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:22:02.468631 kernel: BIOS-provided physical RAM map:
Jan 30 13:22:02.468635 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 30 13:22:02.468639 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 30 13:22:02.468644 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 30 13:22:02.468648 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 30 13:22:02.468652 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 30 13:22:02.468656 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Jan 30 13:22:02.468661 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Jan 30 13:22:02.468665 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Jan 30 13:22:02.468670 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Jan 30 13:22:02.468674 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 30 13:22:02.468680 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 30 13:22:02.468684 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 30 13:22:02.468690 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 30 13:22:02.468695 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 30 13:22:02.468699 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 30 13:22:02.468704 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:22:02.468709 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 30 13:22:02.468713 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 30 13:22:02.468718 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 13:22:02.468723 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 30 13:22:02.468727 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 30 13:22:02.468732 kernel: NX (Execute Disable) protection: active
Jan 30 13:22:02.468737 kernel: APIC: Static calls initialized
Jan 30 13:22:02.468741 kernel: SMBIOS 3.2.1 present.
Jan 30 13:22:02.468747 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 30 13:22:02.468752 kernel: tsc: Detected 3400.000 MHz processor
Jan 30 13:22:02.468756 kernel: tsc: Detected 3399.906 MHz TSC
Jan 30 13:22:02.468761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:22:02.468766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:22:02.468771 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 30 13:22:02.468776 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 30 13:22:02.468781 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:22:02.468786 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 30 13:22:02.468791 kernel: Using GB pages for direct mapping
Jan 30 13:22:02.468797 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:22:02.468802 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 30 13:22:02.468809 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 30 13:22:02.468814 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 30 13:22:02.468819 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 30 13:22:02.468824 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 30 13:22:02.468830 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 30 13:22:02.468835 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 30 13:22:02.468840 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 30 13:22:02.468845 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 30 13:22:02.468851 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 30 13:22:02.468856 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 30 13:22:02.468861 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 30 13:22:02.468866 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 30 13:22:02.468872 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:22:02.468877 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 30 13:22:02.468882 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 30 13:22:02.468887 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:22:02.468892 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:22:02.468897 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 30 13:22:02.468903 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 30 13:22:02.468908 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:22:02.468914 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 30 13:22:02.468919 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 30 13:22:02.468924 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:22:02.468929 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 30 13:22:02.468934 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 30 13:22:02.468939 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 30 13:22:02.468944 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 30 13:22:02.468949 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 30 13:22:02.468955 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 30 13:22:02.468961 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 30 13:22:02.468966 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 30 13:22:02.468971 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 30 13:22:02.468976 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 30 13:22:02.468981 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 30 13:22:02.468986 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 30 13:22:02.468991 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 30 13:22:02.468996 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 30 13:22:02.469002 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 30 13:22:02.469007 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 30 13:22:02.469012 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 30 13:22:02.469017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 30 13:22:02.469022 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 30 13:22:02.469027 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 30 13:22:02.469032 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 30 13:22:02.469037 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 30 13:22:02.469042 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 30 13:22:02.469047 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 30 13:22:02.469053 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 30 13:22:02.469059 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 30 13:22:02.469064 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 30 13:22:02.469069 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 30 13:22:02.469074 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 30 13:22:02.469079 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 30 13:22:02.469084 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 30 13:22:02.469089 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 30 13:22:02.469094 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 30 13:22:02.469100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 30 13:22:02.469105 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 30 13:22:02.469110 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 30 13:22:02.469115 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 30 13:22:02.469120 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 30 13:22:02.469125 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 30 13:22:02.469130 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 30 13:22:02.469135 kernel: No NUMA configuration found
Jan 30 13:22:02.469140 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 30 13:22:02.469146 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 30 13:22:02.469151 kernel: Zone ranges:
Jan 30 13:22:02.469156 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:22:02.469161 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:22:02.469166 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Jan 30 13:22:02.469171 kernel: Movable zone start for each node
Jan 30 13:22:02.469176 kernel: Early memory node ranges
Jan 30 13:22:02.469181 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 30 13:22:02.469186 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 30 13:22:02.469191 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b25fff]
Jan 30 13:22:02.469197 kernel:   node   0: [mem 0x0000000081b28000-0x000000008afccfff]
Jan 30 13:22:02.469202 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 30 13:22:02.469208 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 30 13:22:02.469216 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Jan 30 13:22:02.469222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 30 13:22:02.469228 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:22:02.469233 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 30 13:22:02.469240 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 30 13:22:02.469245 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 30 13:22:02.469250 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 30 13:22:02.469256 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 30 13:22:02.469261 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 30 13:22:02.469267 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 30 13:22:02.469272 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 30 13:22:02.469277 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 13:22:02.469283 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 13:22:02.469288 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 13:22:02.469294 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 13:22:02.469300 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 13:22:02.469305 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 13:22:02.469311 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 13:22:02.469316 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 13:22:02.469321 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 13:22:02.469326 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 13:22:02.469332 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 13:22:02.469337 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 13:22:02.469343 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 13:22:02.469349 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 13:22:02.469354 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 13:22:02.469359 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 13:22:02.469365 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 30 13:22:02.469370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:22:02.469375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:22:02.469381 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:22:02.469386 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:22:02.469393 kernel: TSC deadline timer available
Jan 30 13:22:02.469398 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 30 13:22:02.469404 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 30 13:22:02.469409 kernel: Booting paravirtualized kernel on bare hardware
Jan 30 13:22:02.469415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:22:02.469420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 13:22:02.469426 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 13:22:02.469431 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 13:22:02.469437 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 13:22:02.469444 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:22:02.469450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:22:02.469455 kernel: random: crng init done
Jan 30 13:22:02.469460 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 30 13:22:02.469466 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 30 13:22:02.469471 kernel: Fallback order for Node 0: 0
Jan 30 13:22:02.469479 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 8232415
Jan 30 13:22:02.469485 kernel: Policy zone: Normal
Jan 30 13:22:02.469491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:22:02.469496 kernel: software IO TLB: area num 16.
Jan 30 13:22:02.469502 kernel: Memory: 32718264K/33452980K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734456K reserved, 0K cma-reserved)
Jan 30 13:22:02.469508 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 13:22:02.469513 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:22:02.469518 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:22:02.469524 kernel: Dynamic Preempt: voluntary
Jan 30 13:22:02.469529 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:22:02.469535 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:22:02.469542 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 13:22:02.469547 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:22:02.469553 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:22:02.469558 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:22:02.469564 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:22:02.469569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 13:22:02.469574 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 30 13:22:02.469580 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:22:02.469585 kernel: Console: colour VGA+ 80x25
Jan 30 13:22:02.469591 kernel: printk: console [tty0] enabled
Jan 30 13:22:02.469597 kernel: printk: console [ttyS1] enabled
Jan 30 13:22:02.469602 kernel: ACPI: Core revision 20230628
Jan 30 13:22:02.469608 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 30 13:22:02.469613 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:22:02.469618 kernel: DMAR: Host address width 39
Jan 30 13:22:02.469624 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 30 13:22:02.469629 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 30 13:22:02.469635 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 30 13:22:02.469640 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 30 13:22:02.469646 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 30 13:22:02.469652 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 30 13:22:02.469657 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 30 13:22:02.469662 kernel: x2apic enabled
Jan 30 13:22:02.469668 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 30 13:22:02.469673 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 30 13:22:02.469679 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 30 13:22:02.469684 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 30 13:22:02.469691 kernel: process: using mwait in idle threads
Jan 30 13:22:02.469696 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:22:02.469701 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:22:02.469707 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:22:02.469712 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 13:22:02.469717 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 13:22:02.469723 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 13:22:02.469728 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:22:02.469733 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 13:22:02.469739 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 13:22:02.469744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:22:02.469750 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:22:02.469756 kernel: TAA: Mitigation: TSX disabled
Jan 30 13:22:02.469761 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 30 13:22:02.469766 kernel: SRBDS: Mitigation: Microcode
Jan 30 13:22:02.469772 kernel: GDS: Mitigation: Microcode
Jan 30 13:22:02.469777 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:22:02.469782 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:22:02.469788 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:22:02.469793 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:22:02.469798 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:22:02.469804 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 30 13:22:02.469810 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Jan 30 13:22:02.469815 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Jan 30 13:22:02.469821 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 30 13:22:02.469826 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:22:02.469831 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:22:02.469837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:22:02.469842 kernel: landlock: Up and running.
Jan 30 13:22:02.469848 kernel: SELinux:  Initializing.
Jan 30 13:22:02.469853 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:22:02.469858 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:22:02.469864 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 30 13:22:02.469869 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 13:22:02.469876 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 13:22:02.469881 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 13:22:02.469887 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 30 13:22:02.469892 kernel: ... version:                4
Jan 30 13:22:02.469897 kernel: ... bit width:              48
Jan 30 13:22:02.469903 kernel: ... generic registers:      4
Jan 30 13:22:02.469908 kernel: ... value mask:             0000ffffffffffff
Jan 30 13:22:02.469914 kernel: ... max period:             00007fffffffffff
Jan 30 13:22:02.469919 kernel: ... fixed-purpose events:   3
Jan 30 13:22:02.469925 kernel: ... event mask:             000000070000000f
Jan 30 13:22:02.469931 kernel: signal: max sigframe size: 2032
Jan 30 13:22:02.469936 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 30 13:22:02.469942 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:22:02.469947 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 30 13:22:02.469952 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 30 13:22:02.469958 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:22:02.469963 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:22:02.469969 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11 #12 #13 #14 #15
Jan 30 13:22:02.469975 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:22:02.469981 kernel: smp: Brought up 1 node, 16 CPUs
Jan 30 13:22:02.469986 kernel: smpboot: Max logical packages: 1
Jan 30 13:22:02.469992 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 30 13:22:02.469997 kernel: devtmpfs: initialized
Jan 30 13:22:02.470002 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:22:02.470008 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Jan 30 13:22:02.470013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 30 13:22:02.470020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:22:02.470025 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 13:22:02.470030 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:22:02.470036 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:22:02.470041 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:22:02.470046 kernel: audit: type=2000 audit(1738243317.042:1): state=initialized audit_enabled=0 res=1
Jan 30 13:22:02.470052 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:22:02.470057 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:22:02.470063 kernel: cpuidle: using governor menu
Jan 30 13:22:02.470069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:22:02.470075 kernel: dca service started, version 1.12.1
Jan 30 13:22:02.470080 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 13:22:02.470086 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:22:02.470091 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 30 13:22:02.470096 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:22:02.470102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:22:02.470107 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:22:02.470113 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:22:02.470119 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:22:02.470125 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:22:02.470130 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:22:02.470135 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:22:02.470141 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:22:02.470146 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 30 13:22:02.470151 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:22:02.470157 kernel: ACPI: SSDT 0xFFFF9CEEC1602800 000400 (v02 PmRef  Cpu0Cst  00003001 INTL 20160527)
Jan 30 13:22:02.470162 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:22:02.470169 kernel: ACPI: SSDT 0xFFFF9CEEC15FB800 000683 (v02 PmRef  Cpu0Ist  00003000 INTL 20160527)
Jan 30 13:22:02.470174 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:22:02.470180 kernel: ACPI: SSDT 0xFFFF9CEEC15E5600 0000F4 (v02 PmRef  Cpu0Psd  00003000 INTL 20160527)
Jan 30 13:22:02.470185 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:22:02.470190 kernel: ACPI: SSDT 0xFFFF9CEEC15FB000 0005FC (v02 PmRef  ApIst    00003000 INTL 20160527)
Jan 30 13:22:02.470196 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:22:02.470201 kernel: ACPI: SSDT 0xFFFF9CEEC160E000 000AB0 (v02 PmRef  ApPsd    00003000 INTL 20160527)
Jan 30 13:22:02.470206 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 13:22:02.470212 kernel: ACPI: SSDT 0xFFFF9CEEC0EE1800 00030A (v02 PmRef  ApCst    00003000 INTL 20160527)
Jan 30 13:22:02.470217 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 30 13:22:02.470223 kernel: ACPI: Interpreter enabled
Jan 30 13:22:02.470229 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:22:02.470234 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:22:02.470240 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 30 13:22:02.470245 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 30 13:22:02.470250 kernel: HEST: Table parsing has been initialized.
Jan 30 13:22:02.470256 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 30 13:22:02.470261 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:22:02.470267 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:22:02.470273 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 30 13:22:02.470279 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 30 13:22:02.470284 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 30 13:22:02.470290 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 30 13:22:02.470295 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 30 13:22:02.470301 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 30 13:22:02.470306 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 30 13:22:02.470312 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 30 13:22:02.470317 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 30 13:22:02.470324 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 30 13:22:02.470329 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 30 13:22:02.470335 kernel: ACPI: \PIN_: New power resource
Jan 30 13:22:02.470340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 30 13:22:02.470411 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:22:02.470464 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 30 13:22:02.470513 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 30 13:22:02.470524 kernel: PCI host bridge to bus 0000:00
Jan 30 13:22:02.470572 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 30 13:22:02.470615 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 30 13:22:02.470656 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:22:02.470697 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 30 13:22:02.470738 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 30 13:22:02.470778 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 30 13:22:02.470838 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 30 13:22:02.470893 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 30 13:22:02.470943 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 30 13:22:02.470997 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 30 13:22:02.471045 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 30 13:22:02.471096 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 30 13:22:02.471146 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 30 13:22:02.471199 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 30 13:22:02.471246 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 30 13:22:02.471294 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 30 13:22:02.471345 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 30 13:22:02.471393 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 30 13:22:02.471441 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 30 13:22:02.471495 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 30 13:22:02.471542 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 13:22:02.471596 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 30 13:22:02.471643 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 13:22:02.471694 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 30 13:22:02.471743 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 30 13:22:02.471793 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 30 13:22:02.471849 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 30 13:22:02.471899 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 30 13:22:02.471946 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 30 13:22:02.471997 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 30 13:22:02.472045 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 30 13:22:02.472094 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 30 13:22:02.472145 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 30 13:22:02.472193 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 30 13:22:02.472240 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 30 13:22:02.472286 kernel: pci 0000:00:17.0: reg 0x18: [io  0x6050-0x6057]
Jan 30 13:22:02.472333 kernel: pci 0000:00:17.0: reg 0x1c: [io  0x6040-0x6043]
Jan 30 13:22:02.472379 kernel: pci 0000:00:17.0: reg 0x20: [io  0x6020-0x603f]
Jan 30 13:22:02.472429 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 30 13:22:02.472475 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 30 13:22:02.472572 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 30 13:22:02.472620 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 30 13:22:02.472678 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 30 13:22:02.472729 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 30 13:22:02.472781 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 30 13:22:02.472830 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 30 13:22:02.472881 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 30 13:22:02.472929 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 30 13:22:02.472982 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 30 13:22:02.473031 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 30 13:22:02.473081 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 30 13:22:02.473129 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 13:22:02.473179 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 30 13:22:02.473231 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 30 13:22:02.473280 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 30 13:22:02.473328 kernel: pci 0000:00:1f.4: reg 0x20: [io  0xefa0-0xefbf]
Jan 30 13:22:02.473382 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 30 13:22:02.473430 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 30 13:22:02.473487 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 30 13:22:02.473571 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 30 13:22:02.473620 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 30 13:22:02.473670 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 30 13:22:02.473720 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 13:22:02.473769 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 13:22:02.473822 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 30 13:22:02.473871 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 30 13:22:02.473922 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 30 13:22:02.473970 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 30 13:22:02.474020 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 13:22:02.474069 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 13:22:02.474117 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 13:22:02.474165 kernel: pci 0000:00:01.0:   bridge window [mem 0x95100000-0x952fffff]
Jan 30 13:22:02.474212 kernel: pci 0000:00:01.0:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 13:22:02.474260
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:22:02.474313 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 30 13:22:02.474365 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:22:02.474413 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 30 13:22:02.474461 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 13:22:02.474513 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 30 13:22:02.474562 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.474610 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:22:02.474656 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:22:02.474706 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:22:02.474758 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 13:22:02.474809 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:22:02.474858 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 30 13:22:02.474907 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 13:22:02.474956 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 30 13:22:02.475004 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.475052 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:22:02.475101 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:22:02.475149 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:22:02.475196 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:22:02.475251 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 13:22:02.475300 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 30 13:22:02.475349 kernel: pci 0000:06:00.0: supports D1 D2 Jan 30 13:22:02.475397 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:22:02.475448 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Jan 30 13:22:02.475551 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.475598 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.475652 kernel: pci_bus 0000:07: extended config space not accessible Jan 30 13:22:02.475707 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 13:22:02.475759 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 30 13:22:02.475809 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 30 13:22:02.475863 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 13:22:02.475913 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:22:02.475964 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 13:22:02.476015 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:22:02.476064 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:22:02.476113 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.476160 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.476168 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 13:22:02.476176 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 13:22:02.476182 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 13:22:02.476188 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 13:22:02.476193 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 13:22:02.476199 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 13:22:02.476205 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 13:22:02.476210 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 13:22:02.476216 kernel: iommu: Default domain type: Translated Jan 30 13:22:02.476222 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:22:02.476229 kernel: PCI: Using ACPI for IRQ 
routing Jan 30 13:22:02.476234 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:22:02.476240 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 13:22:02.476246 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Jan 30 13:22:02.476251 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 30 13:22:02.476257 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 30 13:22:02.476262 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 30 13:22:02.476268 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 30 13:22:02.476317 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 30 13:22:02.476371 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 30 13:22:02.476420 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:22:02.476428 kernel: vgaarb: loaded Jan 30 13:22:02.476436 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:22:02.476442 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:22:02.476447 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:22:02.476453 kernel: pnp: PnP ACPI init Jan 30 13:22:02.476525 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 13:22:02.476588 kernel: pnp 00:02: [dma 0 disabled] Jan 30 13:22:02.476638 kernel: pnp 00:03: [dma 0 disabled] Jan 30 13:22:02.476686 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 13:22:02.476731 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 13:22:02.476777 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 13:22:02.476823 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 13:22:02.476870 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 13:22:02.476913 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 13:22:02.476956 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Jan 30 13:22:02.477001 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 13:22:02.477045 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 13:22:02.477088 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 13:22:02.477132 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 13:22:02.477181 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 13:22:02.477226 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 13:22:02.477268 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 13:22:02.477311 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 13:22:02.477354 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 13:22:02.477397 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 13:22:02.477440 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 13:22:02.477562 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 13:22:02.477571 kernel: pnp: PnP ACPI: found 10 devices Jan 30 13:22:02.477577 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:22:02.477583 kernel: NET: Registered PF_INET protocol family Jan 30 13:22:02.477589 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:22:02.477595 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 13:22:02.477600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:22:02.477606 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:22:02.477614 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:22:02.477620 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 13:22:02.477626 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:22:02.477632 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:22:02.477637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:22:02.477643 kernel: NET: Registered PF_XDP protocol family Jan 30 13:22:02.477692 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 30 13:22:02.477740 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 30 13:22:02.477788 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 30 13:22:02.477840 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:22:02.477889 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:22:02.477939 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:22:02.477988 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:22:02.478037 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:22:02.478084 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:22:02.478131 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:22:02.478179 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:22:02.478229 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:22:02.478276 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:22:02.478324 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:22:02.478371 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:22:02.478422 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:22:02.478468 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:22:02.478554 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:22:02.478603 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Jan 30 13:22:02.478652 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.478700 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.478747 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:22:02.478796 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.478843 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.478889 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 13:22:02.478931 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:22:02.478974 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:22:02.479015 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:22:02.479060 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 30 13:22:02.479101 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 13:22:02.479149 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 30 13:22:02.479192 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:22:02.479246 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 30 13:22:02.479289 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 30 13:22:02.479337 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 13:22:02.479380 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 30 13:22:02.479428 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 30 13:22:02.479472 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:22:02.479523 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 13:22:02.479569 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:22:02.479578 kernel: PCI: CLS 64 bytes, default 64 Jan 30 13:22:02.479584 kernel: DMAR: No ATSR found Jan 30 13:22:02.479589 kernel: DMAR: No SATC 
found Jan 30 13:22:02.479595 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 13:22:02.479642 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 30 13:22:02.479690 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 30 13:22:02.479740 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 30 13:22:02.479789 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 30 13:22:02.479835 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 30 13:22:02.479882 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 30 13:22:02.479928 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 30 13:22:02.479975 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 30 13:22:02.480022 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 30 13:22:02.480070 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 30 13:22:02.480119 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 30 13:22:02.480165 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 30 13:22:02.480212 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 30 13:22:02.480258 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 30 13:22:02.480305 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 30 13:22:02.480351 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 30 13:22:02.480397 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 30 13:22:02.480443 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 30 13:22:02.480495 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 30 13:22:02.480542 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 30 13:22:02.480589 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 30 13:22:02.480636 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 30 13:22:02.480685 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 30 13:22:02.480734 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 30 13:22:02.480783 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 13:22:02.480832 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 30 
13:22:02.480884 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 30 13:22:02.480892 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 13:22:02.480898 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:22:02.480904 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 30 13:22:02.480910 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 30 13:22:02.480916 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 13:22:02.480922 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 13:22:02.480927 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 13:22:02.480977 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 13:22:02.480988 kernel: Initialise system trusted keyrings Jan 30 13:22:02.480994 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 13:22:02.480999 kernel: Key type asymmetric registered Jan 30 13:22:02.481005 kernel: Asymmetric key parser 'x509' registered Jan 30 13:22:02.481011 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:22:02.481016 kernel: io scheduler mq-deadline registered Jan 30 13:22:02.481022 kernel: io scheduler kyber registered Jan 30 13:22:02.481028 kernel: io scheduler bfq registered Jan 30 13:22:02.481074 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 30 13:22:02.481124 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 30 13:22:02.481172 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 30 13:22:02.481219 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 30 13:22:02.481265 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 30 13:22:02.481312 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 30 13:22:02.481364 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 13:22:02.481373 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Jan 30 13:22:02.481380 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 13:22:02.481386 kernel: pstore: Using crash dump compression: deflate Jan 30 13:22:02.481392 kernel: pstore: Registered erst as persistent store backend Jan 30 13:22:02.481398 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:22:02.481403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:22:02.481409 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:22:02.481415 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:22:02.481421 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 30 13:22:02.481470 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 13:22:02.481485 kernel: i8042: PNP: No PS/2 controller found. Jan 30 13:22:02.481559 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 13:22:02.481604 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 13:22:02.481647 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T13:22:01 UTC (1738243321) Jan 30 13:22:02.481691 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:22:02.481699 kernel: intel_pstate: Intel P-state driver initializing Jan 30 13:22:02.481705 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 13:22:02.481713 kernel: intel_pstate: HWP enabled Jan 30 13:22:02.481719 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:22:02.481724 kernel: Segment Routing with IPv6 Jan 30 13:22:02.481730 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:22:02.481736 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:22:02.481742 kernel: Key type dns_resolver registered Jan 30 13:22:02.481747 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 30 13:22:02.481753 kernel: IPI shorthand broadcast: enabled Jan 30 13:22:02.481759 kernel: sched_clock: Marking stable (2490262503, 1448893929)->(4502124522, -562968090) Jan 30 13:22:02.481766 kernel: registered taskstats version 1 Jan 30 13:22:02.481772 kernel: Loading compiled-in X.509 certificates Jan 30 13:22:02.481777 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:22:02.481783 kernel: Key type .fscrypt registered Jan 30 13:22:02.481788 kernel: Key type fscrypt-provisioning registered Jan 30 13:22:02.481794 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:22:02.481800 kernel: ima: No architecture policies found Jan 30 13:22:02.481806 kernel: clk: Disabling unused clocks Jan 30 13:22:02.481812 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:22:02.481818 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:22:02.481824 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:22:02.481830 kernel: Run /init as init process Jan 30 13:22:02.481836 kernel: with arguments: Jan 30 13:22:02.481841 kernel: /init Jan 30 13:22:02.481847 kernel: with environment: Jan 30 13:22:02.481852 kernel: HOME=/ Jan 30 13:22:02.481858 kernel: TERM=linux Jan 30 13:22:02.481864 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:22:02.481872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:22:02.481879 systemd[1]: Detected architecture x86-64. Jan 30 13:22:02.481885 systemd[1]: Running in initrd. Jan 30 13:22:02.481891 systemd[1]: No hostname configured, using default hostname. Jan 30 13:22:02.481896 systemd[1]: Hostname set to . 
Jan 30 13:22:02.481902 systemd[1]: Initializing machine ID from random generator. Jan 30 13:22:02.481908 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:22:02.481915 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:02.481921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:02.481928 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:22:02.481934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:22:02.481940 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:22:02.481947 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:22:02.481953 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:22:02.481961 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:22:02.481967 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:02.481973 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:02.481979 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:22:02.481985 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:22:02.481991 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:22:02.481997 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:22:02.482003 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:22:02.482010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:02.482016 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 30 13:22:02.482022 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:22:02.482029 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:02.482035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:02.482041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:02.482047 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:22:02.482053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:22:02.482059 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 30 13:22:02.482065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 30 13:22:02.482071 kernel: clocksource: Switched to clocksource tsc Jan 30 13:22:02.482077 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:22:02.482083 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:22:02.482089 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:22:02.482095 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:22:02.482111 systemd-journald[266]: Collecting audit messages is disabled. Jan 30 13:22:02.482127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:22:02.482134 systemd-journald[266]: Journal started Jan 30 13:22:02.482149 systemd-journald[266]: Runtime Journal (/run/log/journal/b0a96cb629844946b8064d7abe21010f) is 8.0M, max 639.9M, 631.9M free. Jan 30 13:22:02.485445 systemd-modules-load[267]: Inserted module 'overlay' Jan 30 13:22:02.502580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:02.525503 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 30 13:22:02.525520 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:22:02.532678 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:02.532770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:02.532944 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:22:02.539468 systemd-modules-load[267]: Inserted module 'br_netfilter' Jan 30 13:22:02.539554 kernel: Bridge firewalling registered Jan 30 13:22:02.543752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:22:02.559963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:22:02.620146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:02.640593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:02.669873 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:22:02.691792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:02.726803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:02.747817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:22:02.751569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:22:02.766952 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:02.767847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:22:02.769978 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:02.781718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:22:02.786550 systemd-resolved[300]: Positive Trust Anchors: Jan 30 13:22:02.786555 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:22:02.786581 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:22:02.788313 systemd-resolved[300]: Defaulting to hostname 'linux'. Jan 30 13:22:02.793776 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:22:02.814717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:22:02.830704 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:22:02.954790 dracut-cmdline[312]: dracut-dracut-053 Jan 30 13:22:02.961717 dracut-cmdline[312]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:22:03.139502 kernel: SCSI subsystem initialized Jan 30 13:22:03.151511 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 13:22:03.164529 kernel: iscsi: registered transport (tcp) Jan 30 13:22:03.185183 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:22:03.185200 kernel: QLogic iSCSI HBA Driver Jan 30 13:22:03.208096 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:22:03.231734 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:22:03.270213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:22:03.270251 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:22:03.279488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:22:03.317542 kernel: raid6: avx2x4 gen() 42911 MB/s Jan 30 13:22:03.338548 kernel: raid6: avx2x2 gen() 52675 MB/s Jan 30 13:22:03.364630 kernel: raid6: avx2x1 gen() 45106 MB/s Jan 30 13:22:03.364646 kernel: raid6: using algorithm avx2x2 gen() 52675 MB/s Jan 30 13:22:03.391709 kernel: raid6: .... xor() 32492 MB/s, rmw enabled Jan 30 13:22:03.391726 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:22:03.412483 kernel: xor: automatically using best checksumming function avx Jan 30 13:22:03.509523 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:22:03.515687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:03.539790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:03.547182 systemd-udevd[498]: Using default interface naming scheme 'v255'. Jan 30 13:22:03.550599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:22:03.577619 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:22:03.628643 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 30 13:22:03.645803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 13:22:03.666765 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:22:03.753376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:03.780031 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:22:03.780072 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:22:03.780083 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:22:03.791485 kernel: libata version 3.00 loaded.
Jan 30 13:22:03.798487 kernel: PTP clock support registered
Jan 30 13:22:03.798523 kernel: ACPI: bus type USB registered
Jan 30 13:22:03.809023 kernel: usbcore: registered new interface driver usbfs
Jan 30 13:22:03.814513 kernel: usbcore: registered new interface driver hub
Jan 30 13:22:03.814574 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:22:03.814614 kernel: usbcore: registered new device driver usb
Jan 30 13:22:03.825482 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:22:03.831489 kernel: ahci 0000:00:17.0: version 3.0
Jan 30 13:22:04.066628 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Jan 30 13:22:04.066713 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jan 30 13:22:04.066781 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jan 30 13:22:04.066791 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jan 30 13:22:04.066798 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 30 13:22:04.066864 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jan 30 13:22:04.066927 kernel: scsi host0: ahci
Jan 30 13:22:04.066992 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jan 30 13:22:04.067058 kernel: scsi host1: ahci
Jan 30 13:22:04.067122 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 30 13:22:04.067185 kernel: scsi host2: ahci
Jan 30 13:22:04.067245 kernel: pps pps0: new PPS source ptp0
Jan 30 13:22:04.067306 kernel: igb 0000:03:00.0: added PHC on eth0
Jan 30 13:22:04.067373 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 30 13:22:04.067438 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:54
Jan 30 13:22:04.067512 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Jan 30 13:22:04.067578 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 30 13:22:04.067642 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jan 30 13:22:04.067704 kernel: scsi host3: ahci
Jan 30 13:22:04.067765 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jan 30 13:22:04.067827 kernel: scsi host4: ahci
Jan 30 13:22:04.067886 kernel: hub 1-0:1.0: USB hub found
Jan 30 13:22:04.067957 kernel: scsi host5: ahci
Jan 30 13:22:04.068017 kernel: pps pps1: new PPS source ptp1
Jan 30 13:22:04.068076 kernel: igb 0000:04:00.0: added PHC on eth1
Jan 30 13:22:04.068142 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 30 13:22:04.068205 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:55
Jan 30 13:22:04.068268 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Jan 30 13:22:04.068330 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 30 13:22:04.068395 kernel: hub 1-0:1.0: 16 ports detected
Jan 30 13:22:04.068456 kernel: scsi host6: ahci
Jan 30 13:22:04.068521 kernel: hub 2-0:1.0: USB hub found
Jan 30 13:22:04.068585 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Jan 30 13:22:04.068593 kernel: hub 2-0:1.0: 10 ports detected
Jan 30 13:22:04.068652 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Jan 30 13:22:04.068660 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Jan 30 13:22:04.068725 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Jan 30 13:22:04.068734 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Jan 30 13:22:04.068741 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Jan 30 13:22:04.068749 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Jan 30 13:22:04.068756 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Jan 30 13:22:03.838769 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:22:03.874761 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:22:04.146620 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Jan 30 13:22:04.570111 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 30 13:22:04.570196 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Jan 30 13:22:04.570270 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jan 30 13:22:04.786846 kernel: hub 1-14:1.0: USB hub found
Jan 30 13:22:04.786942 kernel: hub 1-14:1.0: 4 ports detected
Jan 30 13:22:04.787019 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 30 13:22:04.787093 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:22:04.787102 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Jan 30 13:22:04.787168 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:22:04.787176 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jan 30 13:22:04.787184 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 13:22:04.787192 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 30 13:22:04.787199 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 30 13:22:04.787206 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:22:04.787216 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jan 30 13:22:04.787223 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jan 30 13:22:04.787231 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 30 13:22:04.787238 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 30 13:22:04.787246 kernel: ata2.00: Features: NCQ-prio
Jan 30 13:22:04.787253 kernel: ata1.00: Features: NCQ-prio
Jan 30 13:22:04.787261 kernel: ata2.00: configured for UDMA/133
Jan 30 13:22:04.787268 kernel: ata1.00: configured for UDMA/133
Jan 30 13:22:04.787275 kernel: scsi 0:0:0:0: Direct-Access     ATA      Micron_5300_MTFD U001  PQ: 0 ANSI: 5
Jan 30 13:22:04.787348 kernel: scsi 1:0:0:0: Direct-Access     ATA      Micron_5300_MTFD U001  PQ: 0 ANSI: 5
Jan 30 13:22:04.787413 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 13:22:04.787421 kernel: ata1.00: Enabling discard_zeroes_data
Jan 30 13:22:04.787429 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 30 13:22:04.787494 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 30 13:22:04.787556 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:22:04.787619 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Jan 30 13:22:04.787684 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jan 30 13:22:04.787746 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Jan 30 13:22:04.787808 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jan 30 13:22:04.787868 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 13:22:04.787928 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 13:22:04.787995 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Jan 30 13:22:05.081818 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 30 13:22:05.082212 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jan 30 13:22:05.082565 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jan 30 13:22:05.082873 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 13:22:05.083199 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 13:22:05.083268 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Jan 30 13:22:05.083660 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jan 30 13:22:05.084046 kernel: ata1.00: Enabling discard_zeroes_data
Jan 30 13:22:05.084113 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:22:05.084193 kernel: GPT:9289727 != 937703087
Jan 30 13:22:05.084256 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:22:05.084315 kernel: GPT:9289727 != 937703087
Jan 30 13:22:05.084375 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:22:05.084435 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 13:22:05.084513 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Jan 30 13:22:05.085001 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jan 30 13:22:05.085791 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by (udev-worker) (550)
Jan 30 13:22:05.085881 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sdb3 scanned by (udev-worker) (553)
Jan 30 13:22:05.085950 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:22:05.086015 kernel: usbcore: registered new interface driver usbhid
Jan 30 13:22:05.086077 kernel: usbhid: USB HID core driver
Jan 30 13:22:05.086139 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jan 30 13:22:05.086199 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 30 13:22:05.086621 kernel: ata1.00: Enabling discard_zeroes_data
Jan 30 13:22:05.086669 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Jan 30 13:22:05.087026 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 13:22:05.087085 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jan 30 13:22:05.087496 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jan 30 13:22:05.087550 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jan 30 13:22:05.087924 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 13:22:05.088283 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Jan 30 13:22:04.004059 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:22:05.110739 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Jan 30 13:22:04.136669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:04.157637 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:22:04.167606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:22:04.167677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:04.178612 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:05.166742 disk-uuid[703]: Primary Header is updated.
Jan 30 13:22:05.166742 disk-uuid[703]: Secondary Entries is updated.
Jan 30 13:22:05.166742 disk-uuid[703]: Secondary Header is updated.
Jan 30 13:22:04.195634 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:22:04.211582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:04.211653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:04.222585 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:04.239630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:04.249831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:22:04.272242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:04.301628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:04.310687 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:04.726817 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Jan 30 13:22:04.756380 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Jan 30 13:22:04.774476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 30 13:22:04.790671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 30 13:22:04.827625 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 30 13:22:04.855590 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:22:05.885592 kernel: ata1.00: Enabling discard_zeroes_data
Jan 30 13:22:05.893378 disk-uuid[704]: The operation has completed successfully.
Jan 30 13:22:05.901654 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 13:22:05.927922 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:22:05.928020 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:22:05.973785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:22:05.999610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:22:05.999668 sh[728]: Success
Jan 30 13:22:06.032803 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:22:06.050618 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:22:06.061894 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:22:06.117864 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 30 13:22:06.117886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:22:06.127493 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:22:06.134555 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:22:06.140384 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:22:06.154529 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:22:06.156869 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:22:06.166864 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:22:06.180969 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:22:06.193780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:22:06.229312 kernel: BTRFS info (device sdb6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:06.229331 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:22:06.236340 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 13:22:06.254117 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 13:22:06.254133 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 13:22:06.259735 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:22:06.282750 kernel: BTRFS info (device sdb6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:06.273182 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:22:06.293253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:22:06.341797 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:22:06.357725 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:22:06.374600 systemd-networkd[911]: lo: Link UP
Jan 30 13:22:06.374604 systemd-networkd[911]: lo: Gained carrier
Jan 30 13:22:06.376996 systemd-networkd[911]: Enumeration completed
Jan 30 13:22:06.389019 ignition[805]: Ignition 2.20.0
Jan 30 13:22:06.377599 systemd-networkd[911]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:06.389024 ignition[805]: Stage: fetch-offline
Jan 30 13:22:06.380605 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:06.389045 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:06.391286 unknown[805]: fetched base config from "system"
Jan 30 13:22:06.389050 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:06.391293 unknown[805]: fetched user config from "system"
Jan 30 13:22:06.389103 ignition[805]: parsed url from cmdline: ""
Jan 30 13:22:06.404399 systemd-networkd[911]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:06.389105 ignition[805]: no config URL provided
Jan 30 13:22:06.404735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:22:06.389107 ignition[805]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:22:06.423089 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:06.389130 ignition[805]: parsing config with SHA512: 81cabcb72845019c0732f687dc04a50b3b531485dec95a7c6f48049a2c8200b44625f237907d33eca8c69a544762cd387f8ec5f38a50a84ef2a5220a1402b4cc
Jan 30 13:22:06.432899 systemd-networkd[911]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:06.391515 ignition[805]: fetch-offline: fetch-offline passed
Jan 30 13:22:06.437615 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:22:06.391519 ignition[805]: POST message to Packet Timeline
Jan 30 13:22:06.606653 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 30 13:22:06.450711 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:22:06.391523 ignition[805]: POST Status error: resource requires networking
Jan 30 13:22:06.600998 systemd-networkd[911]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:06.391575 ignition[805]: Ignition finished successfully
Jan 30 13:22:06.464568 ignition[923]: Ignition 2.20.0
Jan 30 13:22:06.464573 ignition[923]: Stage: kargs
Jan 30 13:22:06.464672 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:06.464678 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:06.465150 ignition[923]: kargs: kargs passed
Jan 30 13:22:06.465153 ignition[923]: POST message to Packet Timeline
Jan 30 13:22:06.465164 ignition[923]: GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:06.465507 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59409->[::1]:53: read: connection refused
Jan 30 13:22:06.666049 ignition[923]: GET https://metadata.packet.net/metadata: attempt #2
Jan 30 13:22:06.667019 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37521->[::1]:53: read: connection refused
Jan 30 13:22:06.809513 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 30 13:22:06.810204 systemd-networkd[911]: eno1: Link UP
Jan 30 13:22:06.810364 systemd-networkd[911]: eno2: Link UP
Jan 30 13:22:06.810516 systemd-networkd[911]: enp1s0f0np0: Link UP
Jan 30 13:22:06.810688 systemd-networkd[911]: enp1s0f0np0: Gained carrier
Jan 30 13:22:06.823795 systemd-networkd[911]: enp1s0f1np1: Link UP
Jan 30 13:22:06.859746 systemd-networkd[911]: enp1s0f0np0: DHCPv4 address 147.75.90.199/31, gateway 147.75.90.198 acquired from 145.40.83.140
Jan 30 13:22:07.067589 ignition[923]: GET https://metadata.packet.net/metadata: attempt #3
Jan 30 13:22:07.068737 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43889->[::1]:53: read: connection refused
Jan 30 13:22:07.611285 systemd-networkd[911]: enp1s0f1np1: Gained carrier
Jan 30 13:22:07.869741 ignition[923]: GET https://metadata.packet.net/metadata: attempt #4
Jan 30 13:22:07.870745 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52925->[::1]:53: read: connection refused
Jan 30 13:22:08.571081 systemd-networkd[911]: enp1s0f0np0: Gained IPv6LL
Jan 30 13:22:09.211099 systemd-networkd[911]: enp1s0f1np1: Gained IPv6LL
Jan 30 13:22:09.471644 ignition[923]: GET https://metadata.packet.net/metadata: attempt #5
Jan 30 13:22:09.472666 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49541->[::1]:53: read: connection refused
Jan 30 13:22:12.675443 ignition[923]: GET https://metadata.packet.net/metadata: attempt #6
Jan 30 13:22:13.122284 ignition[923]: GET result: OK
Jan 30 13:22:13.518853 ignition[923]: Ignition finished successfully
Jan 30 13:22:13.524742 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:13.553739 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:22:13.559823 ignition[943]: Ignition 2.20.0
Jan 30 13:22:13.559828 ignition[943]: Stage: disks
Jan 30 13:22:13.559938 ignition[943]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:13.559945 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:13.560470 ignition[943]: disks: disks passed
Jan 30 13:22:13.560473 ignition[943]: POST message to Packet Timeline
Jan 30 13:22:13.560488 ignition[943]: GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:14.343261 ignition[943]: GET result: OK
Jan 30 13:22:14.695789 ignition[943]: Ignition finished successfully
Jan 30 13:22:14.697651 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:22:14.713765 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:14.732920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:22:14.753909 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:14.775801 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:14.795792 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:14.834721 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:22:14.868842 systemd-fsck[959]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:22:14.878951 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:22:14.912678 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:22:14.984376 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:22:15.001708 kernel: EXT4-fs (sdb9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 13:22:14.993909 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:15.024679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:15.071521 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by mount (968)
Jan 30 13:22:15.071535 kernel: BTRFS info (device sdb6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:15.071546 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:22:15.071553 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 13:22:15.033946 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:22:15.101712 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 13:22:15.101725 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 13:22:15.093755 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:22:15.128581 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 30 13:22:15.139571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:22:15.139589 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:15.194650 coreos-metadata[985]: Jan 30 13:22:15.159 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 13:22:15.214671 coreos-metadata[986]: Jan 30 13:22:15.159 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 13:22:15.159615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:15.184774 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:22:15.213761 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:22:15.264552 initrd-setup-root[1000]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:22:15.274603 initrd-setup-root[1007]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:22:15.284595 initrd-setup-root[1014]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:22:15.295594 initrd-setup-root[1021]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:22:15.302986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:22:15.325755 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:22:15.350698 kernel: BTRFS info (device sdb6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:15.326375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:15.359490 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:22:15.380923 ignition[1088]: INFO : Ignition 2.20.0
Jan 30 13:22:15.380923 ignition[1088]: INFO : Stage: mount
Jan 30 13:22:15.404677 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:15.404677 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:15.404677 ignition[1088]: INFO : mount: mount passed
Jan 30 13:22:15.404677 ignition[1088]: INFO : POST message to Packet Timeline
Jan 30 13:22:15.404677 ignition[1088]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:15.385818 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:22:15.787140 coreos-metadata[986]: Jan 30 13:22:15.787 INFO Fetch successful
Jan 30 13:22:15.859269 coreos-metadata[985]: Jan 30 13:22:15.859 INFO Fetch successful
Jan 30 13:22:15.866606 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 30 13:22:15.866670 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 30 13:22:15.900575 coreos-metadata[985]: Jan 30 13:22:15.887 INFO wrote hostname ci-4186.1.0-a-9d6a1ac7ae to /sysroot/etc/hostname
Jan 30 13:22:15.888819 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:16.107052 ignition[1088]: INFO : GET result: OK
Jan 30 13:22:16.666917 ignition[1088]: INFO : Ignition finished successfully
Jan 30 13:22:16.669999 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:22:16.704685 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:22:16.715072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:16.776329 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (1112)
Jan 30 13:22:16.776357 kernel: BTRFS info (device sdb6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:16.784411 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:22:16.790296 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 13:22:16.805202 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 13:22:16.805218 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 13:22:16.807866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:16.833380 ignition[1129]: INFO : Ignition 2.20.0
Jan 30 13:22:16.833380 ignition[1129]: INFO : Stage: files
Jan 30 13:22:16.847694 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:16.847694 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:16.847694 ignition[1129]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 30 13:22:16.847694 ignition[1129]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:22:16.837318 unknown[1129]: wrote ssh authorized keys file for user: core
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:17.227816 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:22:17.402923 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:22:17.899394 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:17.899394 ignition[1129]: INFO : files: op(b): [started]  processing unit "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: files passed
Jan 30 13:22:17.929789 ignition[1129]: INFO : POST message to Packet Timeline
Jan 30 13:22:17.929789 ignition[1129]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:18.470992 ignition[1129]: INFO : GET result: OK
Jan 30 13:22:18.819872 ignition[1129]: INFO : Ignition finished successfully
Jan 30 13:22:18.822325 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:22:18.850746 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:22:18.862134 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:22:18.883844 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:22:18.883913 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:22:18.907163 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:22:18.956727 initrd-setup-root-after-ignition[1168]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:18.956727 initrd-setup-root-after-ignition[1168]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:18.927188 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:22:19.004811 initrd-setup-root-after-ignition[1172]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:18.963056 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:22:19.040064 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:22:19.040120 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:22:19.058921 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:22:19.079683 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:22:19.096846 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:22:19.106836 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:22:19.185835 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:22:19.215029 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 30 13:22:19.245250 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:22:19.257109 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:19.278207 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:22:19.297260 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:22:19.297683 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:22:19.325206 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:22:19.347100 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:22:19.366222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:22:19.385194 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:22:19.406085 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:22:19.427113 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:22:19.447097 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:22:19.469146 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:22:19.491129 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:22:19.511096 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:22:19.530102 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:22:19.530531 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:22:19.556305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:19.577124 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:19.598985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:22:19.599437 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 13:22:19.622990 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:22:19.623388 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:22:19.655088 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:22:19.655553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:22:19.675287 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:22:19.693973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:22:19.694435 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:19.704375 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:22:19.732204 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:22:19.740328 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:22:19.740651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:22:19.767225 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:22:19.767542 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:19.790307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:22:19.790733 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:22:19.809290 systemd[1]: ignition-files.service: Deactivated successfully. 
Jan 30 13:22:19.925669 ignition[1193]: INFO : Ignition 2.20.0 Jan 30 13:22:19.925669 ignition[1193]: INFO : Stage: umount Jan 30 13:22:19.925669 ignition[1193]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:19.925669 ignition[1193]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:22:19.925669 ignition[1193]: INFO : umount: umount passed Jan 30 13:22:19.925669 ignition[1193]: INFO : POST message to Packet Timeline Jan 30 13:22:19.925669 ignition[1193]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:22:19.809697 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:22:19.827289 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:22:19.827701 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:22:19.856624 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:22:19.887730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:22:19.888756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:22:19.888869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:19.917791 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:22:19.917887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:22:19.963691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:22:19.968580 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:22:19.968900 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:22:20.043159 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:22:20.043279 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 30 13:22:20.512425 ignition[1193]: INFO : GET result: OK Jan 30 13:22:20.845560 ignition[1193]: INFO : Ignition finished successfully Jan 30 13:22:20.848725 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:22:20.849010 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:22:20.864850 systemd[1]: Stopped target network.target - Network. Jan 30 13:22:20.879735 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:22:20.879997 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:22:20.897920 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:22:20.898057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:22:20.915994 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:22:20.916154 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:22:20.933994 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:22:20.934159 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:20.951988 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:22:20.952157 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:22:20.960566 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:22:20.974665 systemd-networkd[911]: enp1s0f1np1: DHCPv6 lease lost Jan 30 13:22:20.985709 systemd-networkd[911]: enp1s0f0np0: DHCPv6 lease lost Jan 30 13:22:20.988128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:22:21.007689 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:22:21.007963 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:22:21.028146 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:22:21.028591 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 30 13:22:21.049131 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:22:21.049251 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:21.083700 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:22:21.106679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:22:21.106752 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:21.126783 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:22:21.126873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:21.144883 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:22:21.145049 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:21.164875 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:22:21.165039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:21.185105 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:21.206786 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:22:21.207157 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:22:21.237653 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:22:21.237797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:21.244975 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:22:21.245081 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:21.272797 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:22:21.272958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:21.303059 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 30 13:22:21.303228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:22:21.342688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:22:21.342853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:21.390600 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:22:21.422546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:22:21.422586 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:21.444644 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:22:21.659645 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Jan 30 13:22:21.444710 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:22:21.465772 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:22:21.465916 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:21.486791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:21.486930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:21.510117 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:22:21.510378 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:22:21.529641 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:22:21.529892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:22:21.549835 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:22:21.584767 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:22:21.603715 systemd[1]: Switching root. 
Jan 30 13:22:21.772618 systemd-journald[266]: Journal stopped Jan 30 13:22:02.468604 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05 Jan 30 13:22:02.468618 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:22:02.468625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:22:02.468631 kernel: BIOS-provided physical RAM map: Jan 30 13:22:02.468635 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 30 13:22:02.468639 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 30 13:22:02.468644 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 30 13:22:02.468648 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 30 13:22:02.468652 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 30 13:22:02.468656 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable Jan 30 13:22:02.468661 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS Jan 30 13:22:02.468665 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved Jan 30 13:22:02.468670 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable Jan 30 13:22:02.468674 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jan 30 13:22:02.468680 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jan 30 13:22:02.468684 kernel: BIOS-e820: [mem 
0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jan 30 13:22:02.468690 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jan 30 13:22:02.468695 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jan 30 13:22:02.468699 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jan 30 13:22:02.468704 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 30 13:22:02.468709 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 30 13:22:02.468713 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 30 13:22:02.468718 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 30 13:22:02.468723 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 30 13:22:02.468727 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jan 30 13:22:02.468732 kernel: NX (Execute Disable) protection: active Jan 30 13:22:02.468737 kernel: APIC: Static calls initialized Jan 30 13:22:02.468741 kernel: SMBIOS 3.2.1 present. 
Jan 30 13:22:02.468747 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Jan 30 13:22:02.468752 kernel: tsc: Detected 3400.000 MHz processor Jan 30 13:22:02.468756 kernel: tsc: Detected 3399.906 MHz TSC Jan 30 13:22:02.468761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:22:02.468766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:22:02.468771 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jan 30 13:22:02.468776 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 30 13:22:02.468781 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:22:02.468786 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jan 30 13:22:02.468791 kernel: Using GB pages for direct mapping Jan 30 13:22:02.468797 kernel: ACPI: Early table checksum verification disabled Jan 30 13:22:02.468802 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 30 13:22:02.468809 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 30 13:22:02.468814 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jan 30 13:22:02.468819 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 30 13:22:02.468824 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jan 30 13:22:02.468830 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jan 30 13:22:02.468835 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jan 30 13:22:02.468840 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 30 13:22:02.468845 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 30 13:22:02.468851 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jan 30 13:22:02.468856 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 30 13:22:02.468861 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 30 13:22:02.468866 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 30 13:22:02.468872 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:22:02.468877 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 30 13:22:02.468882 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 30 13:22:02.468887 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:22:02.468892 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:22:02.468897 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 30 13:22:02.468903 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 30 13:22:02.468908 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:22:02.468914 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:22:02.468919 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 30 13:22:02.468924 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jan 30 13:22:02.468929 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 30 13:22:02.468934 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 30 13:22:02.468939 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 30 13:22:02.468944 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jan 30 
13:22:02.468949 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 30 13:22:02.468955 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 30 13:22:02.468961 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 30 13:22:02.468966 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Jan 30 13:22:02.468971 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 30 13:22:02.468976 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jan 30 13:22:02.468981 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jan 30 13:22:02.468986 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jan 30 13:22:02.468991 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jan 30 13:22:02.468996 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jan 30 13:22:02.469002 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jan 30 13:22:02.469007 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jan 30 13:22:02.469012 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jan 30 13:22:02.469017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jan 30 13:22:02.469022 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jan 30 13:22:02.469027 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jan 30 13:22:02.469032 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jan 30 13:22:02.469037 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jan 30 13:22:02.469042 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jan 30 13:22:02.469047 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jan 30 
13:22:02.469053 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jan 30 13:22:02.469059 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jan 30 13:22:02.469064 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jan 30 13:22:02.469069 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jan 30 13:22:02.469074 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jan 30 13:22:02.469079 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jan 30 13:22:02.469084 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jan 30 13:22:02.469089 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jan 30 13:22:02.469094 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jan 30 13:22:02.469100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jan 30 13:22:02.469105 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jan 30 13:22:02.469110 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jan 30 13:22:02.469115 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jan 30 13:22:02.469120 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jan 30 13:22:02.469125 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jan 30 13:22:02.469130 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jan 30 13:22:02.469135 kernel: No NUMA configuration found Jan 30 13:22:02.469140 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jan 30 13:22:02.469146 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jan 30 13:22:02.469151 kernel: Zone ranges: Jan 30 13:22:02.469156 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:22:02.469161 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:22:02.469166 kernel: 
Normal [mem 0x0000000100000000-0x000000086effffff] Jan 30 13:22:02.469171 kernel: Movable zone start for each node Jan 30 13:22:02.469176 kernel: Early memory node ranges Jan 30 13:22:02.469181 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 30 13:22:02.469186 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 30 13:22:02.469191 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Jan 30 13:22:02.469197 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Jan 30 13:22:02.469202 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 30 13:22:02.469208 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 30 13:22:02.469216 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 30 13:22:02.469222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 30 13:22:02.469228 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:22:02.469233 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 30 13:22:02.469240 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 30 13:22:02.469245 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 30 13:22:02.469250 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 30 13:22:02.469256 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 30 13:22:02.469261 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 30 13:22:02.469267 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 30 13:22:02.469272 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 30 13:22:02.469277 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 30 13:22:02.469283 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 30 13:22:02.469288 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 30 13:22:02.469294 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 30 13:22:02.469300 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high 
edge lint[0x1]) Jan 30 13:22:02.469305 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 30 13:22:02.469311 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 30 13:22:02.469316 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 30 13:22:02.469321 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 30 13:22:02.469326 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 30 13:22:02.469332 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 30 13:22:02.469337 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 30 13:22:02.469343 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 30 13:22:02.469349 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 30 13:22:02.469354 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 30 13:22:02.469359 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 30 13:22:02.469365 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 30 13:22:02.469370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:22:02.469375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:22:02.469381 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:22:02.469386 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:22:02.469393 kernel: TSC deadline timer available Jan 30 13:22:02.469398 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 30 13:22:02.469404 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jan 30 13:22:02.469409 kernel: Booting paravirtualized kernel on bare hardware Jan 30 13:22:02.469415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:22:02.469420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 13:22:02.469426 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 
13:22:02.469431 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 13:22:02.469437 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 13:22:02.469444 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:22:02.469450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:22:02.469455 kernel: random: crng init done Jan 30 13:22:02.469460 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 30 13:22:02.469466 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 30 13:22:02.469471 kernel: Fallback order for Node 0: 0 Jan 30 13:22:02.469479 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 30 13:22:02.469485 kernel: Policy zone: Normal Jan 30 13:22:02.469491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:22:02.469496 kernel: software IO TLB: area num 16. Jan 30 13:22:02.469502 kernel: Memory: 32718264K/33452980K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734456K reserved, 0K cma-reserved) Jan 30 13:22:02.469508 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 13:22:02.469513 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:22:02.469518 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:22:02.469524 kernel: Dynamic Preempt: voluntary Jan 30 13:22:02.469529 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:22:02.469535 kernel: rcu: RCU event tracing is enabled. 
Jan 30 13:22:02.469542 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 13:22:02.469547 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:22:02.469553 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:22:02.469558 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:22:02.469564 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:22:02.469569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 13:22:02.469574 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 13:22:02.469580 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:22:02.469585 kernel: Console: colour VGA+ 80x25 Jan 30 13:22:02.469591 kernel: printk: console [tty0] enabled Jan 30 13:22:02.469597 kernel: printk: console [ttyS1] enabled Jan 30 13:22:02.469602 kernel: ACPI: Core revision 20230628 Jan 30 13:22:02.469608 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jan 30 13:22:02.469613 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:22:02.469618 kernel: DMAR: Host address width 39 Jan 30 13:22:02.469624 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 13:22:02.469629 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 13:22:02.469635 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 30 13:22:02.469640 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 30 13:22:02.469646 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 13:22:02.469652 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
Jan 30 13:22:02.469657 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 13:22:02.469662 kernel: x2apic enabled Jan 30 13:22:02.469668 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 13:22:02.469673 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 13:22:02.469679 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 30 13:22:02.469684 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 13:22:02.469691 kernel: process: using mwait in idle threads Jan 30 13:22:02.469696 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:22:02.469701 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:22:02.469707 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:22:02.469712 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:22:02.469717 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:22:02.469723 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 13:22:02.469728 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:22:02.469733 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 13:22:02.469739 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 13:22:02.469744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:22:02.469750 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:22:02.469756 kernel: TAA: Mitigation: TSX disabled Jan 30 13:22:02.469761 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 13:22:02.469766 kernel: SRBDS: Mitigation: Microcode Jan 30 13:22:02.469772 kernel: GDS: Mitigation: Microcode Jan 30 13:22:02.469777 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Jan 30 13:22:02.469782 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:22:02.469788 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:22:02.469793 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 13:22:02.469798 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 13:22:02.469804 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:22:02.469810 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 13:22:02.469815 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 13:22:02.469821 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 30 13:22:02.469826 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:22:02.469831 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:22:02.469837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:22:02.469842 kernel: landlock: Up and running. Jan 30 13:22:02.469848 kernel: SELinux: Initializing. Jan 30 13:22:02.469853 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:22:02.469858 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:22:02.469864 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 13:22:02.469869 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:22:02.469876 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:22:02.469881 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:22:02.469887 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 13:22:02.469892 kernel: ... 
version: 4 Jan 30 13:22:02.469897 kernel: ... bit width: 48 Jan 30 13:22:02.469903 kernel: ... generic registers: 4 Jan 30 13:22:02.469908 kernel: ... value mask: 0000ffffffffffff Jan 30 13:22:02.469914 kernel: ... max period: 00007fffffffffff Jan 30 13:22:02.469919 kernel: ... fixed-purpose events: 3 Jan 30 13:22:02.469925 kernel: ... event mask: 000000070000000f Jan 30 13:22:02.469931 kernel: signal: max sigframe size: 2032 Jan 30 13:22:02.469936 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 13:22:02.469942 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:22:02.469947 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:22:02.469952 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 13:22:02.469958 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:22:02.469963 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:22:02.469969 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 13:22:02.469975 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:22:02.469981 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 13:22:02.469986 kernel: smpboot: Max logical packages: 1 Jan 30 13:22:02.469992 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 13:22:02.469997 kernel: devtmpfs: initialized Jan 30 13:22:02.470002 kernel: x86/mm: Memory block size: 128MB Jan 30 13:22:02.470008 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Jan 30 13:22:02.470013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 30 13:22:02.470020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:22:02.470025 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 13:22:02.470030 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:22:02.470036 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:22:02.470041 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:22:02.470046 kernel: audit: type=2000 audit(1738243317.042:1): state=initialized audit_enabled=0 res=1 Jan 30 13:22:02.470052 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:22:02.470057 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:22:02.470063 kernel: cpuidle: using governor menu Jan 30 13:22:02.470069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:22:02.470075 kernel: dca service started, version 1.12.1 Jan 30 13:22:02.470080 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 13:22:02.470086 kernel: PCI: Using configuration type 1 for base access Jan 30 13:22:02.470091 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 13:22:02.470096 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:22:02.470102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:22:02.470107 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:22:02.470113 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:22:02.470119 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:22:02.470125 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:22:02.470130 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:22:02.470135 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:22:02.470141 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:22:02.470146 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 13:22:02.470151 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:22:02.470157 kernel: ACPI: SSDT 0xFFFF9CEEC1602800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 13:22:02.470162 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:22:02.470169 kernel: ACPI: SSDT 0xFFFF9CEEC15FB800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 13:22:02.470174 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:22:02.470180 kernel: ACPI: SSDT 0xFFFF9CEEC15E5600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 13:22:02.470185 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:22:02.470190 kernel: ACPI: SSDT 0xFFFF9CEEC15FB000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 13:22:02.470196 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:22:02.470201 kernel: ACPI: SSDT 0xFFFF9CEEC160E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 13:22:02.470206 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:22:02.470212 kernel: ACPI: SSDT 0xFFFF9CEEC0EE1800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 13:22:02.470217 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 13:22:02.470223 kernel: ACPI: Interpreter enabled Jan 30 13:22:02.470229 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:22:02.470234 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 30 13:22:02.470240 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 13:22:02.470245 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 13:22:02.470250 kernel: HEST: Table parsing has been initialized. Jan 30 13:22:02.470256 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 30 13:22:02.470261 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:22:02.470267 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:22:02.470273 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 13:22:02.470279 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 13:22:02.470284 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 13:22:02.470290 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 13:22:02.470295 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 13:22:02.470301 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 13:22:02.470306 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 13:22:02.470312 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 13:22:02.470317 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 13:22:02.470324 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 13:22:02.470329 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 13:22:02.470335 kernel: ACPI: \PIN_: New power resource Jan 30 13:22:02.470340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 13:22:02.470411 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:22:02.470464 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 13:22:02.470513 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 13:22:02.470524 kernel: PCI host bridge to bus 0000:00 Jan 30 13:22:02.470572 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 30 13:22:02.470615 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:22:02.470656 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:22:02.470697 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 30 13:22:02.470738 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 13:22:02.470778 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 13:22:02.470838 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 13:22:02.470893 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 13:22:02.470943 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.470997 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 13:22:02.471045 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 30 13:22:02.471096 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 13:22:02.471146 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 30 13:22:02.471199 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 13:22:02.471246 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 30 13:22:02.471294 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 30 13:22:02.471345 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 13:22:02.471393 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 30 13:22:02.471441 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 30 13:22:02.471495 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 13:22:02.471542 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:22:02.471596 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 13:22:02.471643 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 
13:22:02.471694 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 13:22:02.471743 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 30 13:22:02.471793 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 13:22:02.471849 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 13:22:02.471899 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 30 13:22:02.471946 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 13:22:02.471997 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 13:22:02.472045 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 30 13:22:02.472094 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 13:22:02.472145 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 13:22:02.472193 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 30 13:22:02.472240 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 30 13:22:02.472286 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 30 13:22:02.472333 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 30 13:22:02.472379 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 30 13:22:02.472429 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 30 13:22:02.472475 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 30 13:22:02.472572 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 13:22:02.472620 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.472678 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 13:22:02.472729 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.472781 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 13:22:02.472830 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.472881 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 
30 13:22:02.472929 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.472982 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 30 13:22:02.473031 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.473081 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 13:22:02.473129 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:22:02.473179 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 13:22:02.473231 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 13:22:02.473280 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 30 13:22:02.473328 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 13:22:02.473382 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 13:22:02.473430 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 13:22:02.473487 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 13:22:02.473571 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 13:22:02.473620 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 30 13:22:02.473670 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 30 13:22:02.473720 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:22:02.473769 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:22:02.473822 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 13:22:02.473871 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 13:22:02.473922 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 30 13:22:02.473970 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 30 13:22:02.474020 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:22:02.474069 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:22:02.474117 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:22:02.474165 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:22:02.474212 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:22:02.474260 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:22:02.474313 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 30 13:22:02.474365 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:22:02.474413 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 30 13:22:02.474461 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 13:22:02.474513 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 30 13:22:02.474562 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.474610 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:22:02.474656 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:22:02.474706 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:22:02.474758 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 13:22:02.474809 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:22:02.474858 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 30 13:22:02.474907 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 13:22:02.474956 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 30 13:22:02.475004 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:22:02.475052 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:22:02.475101 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:22:02.475149 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:22:02.475196 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Jan 30 13:22:02.475251 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 13:22:02.475300 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 30 13:22:02.475349 kernel: pci 0000:06:00.0: supports D1 D2 Jan 30 13:22:02.475397 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:22:02.475448 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:22:02.475551 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.475598 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.475652 kernel: pci_bus 0000:07: extended config space not accessible Jan 30 13:22:02.475707 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 13:22:02.475759 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 30 13:22:02.475809 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 30 13:22:02.475863 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 13:22:02.475913 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:22:02.475964 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 13:22:02.476015 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:22:02.476064 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:22:02.476113 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.476160 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.476168 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 13:22:02.476176 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 13:22:02.476182 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 13:22:02.476188 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 13:22:02.476193 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 13:22:02.476199 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Jan 30 13:22:02.476205 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 13:22:02.476210 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 13:22:02.476216 kernel: iommu: Default domain type: Translated Jan 30 13:22:02.476222 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:22:02.476229 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:22:02.476234 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:22:02.476240 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 13:22:02.476246 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Jan 30 13:22:02.476251 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 30 13:22:02.476257 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 30 13:22:02.476262 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 30 13:22:02.476268 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 30 13:22:02.476317 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 30 13:22:02.476371 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 30 13:22:02.476420 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:22:02.476428 kernel: vgaarb: loaded Jan 30 13:22:02.476436 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:22:02.476442 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:22:02.476447 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:22:02.476453 kernel: pnp: PnP ACPI init Jan 30 13:22:02.476525 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 13:22:02.476588 kernel: pnp 00:02: [dma 0 disabled] Jan 30 13:22:02.476638 kernel: pnp 00:03: [dma 0 disabled] Jan 30 13:22:02.476686 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 13:22:02.476731 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 13:22:02.476777 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jan 30 13:22:02.476823 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 13:22:02.476870 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 13:22:02.476913 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 13:22:02.476956 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 13:22:02.477001 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 13:22:02.477045 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 13:22:02.477088 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 13:22:02.477132 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 13:22:02.477181 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 13:22:02.477226 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 13:22:02.477268 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 13:22:02.477311 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 13:22:02.477354 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 13:22:02.477397 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 13:22:02.477440 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 13:22:02.477562 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 13:22:02.477571 kernel: pnp: PnP ACPI: found 10 devices Jan 30 13:22:02.477577 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:22:02.477583 kernel: NET: Registered PF_INET protocol family Jan 30 13:22:02.477589 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:22:02.477595 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 13:22:02.477600 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:22:02.477606 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:22:02.477614 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:22:02.477620 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 13:22:02.477626 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:22:02.477632 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:22:02.477637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:22:02.477643 kernel: NET: Registered PF_XDP protocol family Jan 30 13:22:02.477692 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 30 13:22:02.477740 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 30 13:22:02.477788 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 30 13:22:02.477840 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:22:02.477889 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:22:02.477939 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:22:02.477988 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:22:02.478037 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:22:02.478084 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:22:02.478131 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:22:02.478179 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:22:02.478229 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:22:02.478276 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:22:02.478324 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 
30 13:22:02.478371 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:22:02.478422 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:22:02.478468 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:22:02.478554 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:22:02.478603 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:22:02.478652 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.478700 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.478747 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:22:02.478796 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:22:02.478843 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:22:02.478889 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 13:22:02.478931 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:22:02.478974 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:22:02.479015 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:22:02.479060 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 30 13:22:02.479101 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 13:22:02.479149 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 30 13:22:02.479192 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:22:02.479246 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 30 13:22:02.479289 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 30 13:22:02.479337 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 13:22:02.479380 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 30 13:22:02.479428 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 30 13:22:02.479472 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:22:02.479523 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 13:22:02.479569 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:22:02.479578 kernel: PCI: CLS 64 bytes, default 64 Jan 30 13:22:02.479584 kernel: DMAR: No ATSR found Jan 30 13:22:02.479589 kernel: DMAR: No SATC found Jan 30 13:22:02.479595 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 13:22:02.479642 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 30 13:22:02.479690 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 30 13:22:02.479740 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 30 13:22:02.479789 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 30 13:22:02.479835 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 30 13:22:02.479882 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 30 13:22:02.479928 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 30 13:22:02.479975 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 30 13:22:02.480022 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 30 13:22:02.480070 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 30 13:22:02.480119 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 30 13:22:02.480165 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 30 13:22:02.480212 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 30 13:22:02.480258 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 30 13:22:02.480305 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 30 13:22:02.480351 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 30 13:22:02.480397 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 30 13:22:02.480443 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 30 13:22:02.480495 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 30 13:22:02.480542 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 30 13:22:02.480589 kernel: pci 0000:00:1f.5: Adding to iommu group 
14 Jan 30 13:22:02.480636 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 30 13:22:02.480685 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 30 13:22:02.480734 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 30 13:22:02.480783 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 13:22:02.480832 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 30 13:22:02.480884 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 30 13:22:02.480892 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 13:22:02.480898 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:22:02.480904 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 30 13:22:02.480910 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 30 13:22:02.480916 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 13:22:02.480922 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 13:22:02.480927 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 13:22:02.480977 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 13:22:02.480988 kernel: Initialise system trusted keyrings Jan 30 13:22:02.480994 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 13:22:02.480999 kernel: Key type asymmetric registered Jan 30 13:22:02.481005 kernel: Asymmetric key parser 'x509' registered Jan 30 13:22:02.481011 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:22:02.481016 kernel: io scheduler mq-deadline registered Jan 30 13:22:02.481022 kernel: io scheduler kyber registered Jan 30 13:22:02.481028 kernel: io scheduler bfq registered Jan 30 13:22:02.481074 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 30 13:22:02.481124 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 30 13:22:02.481172 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 
Jan 30 13:22:02.481219 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 30 13:22:02.481265 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 30 13:22:02.481312 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 30 13:22:02.481364 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 13:22:02.481373 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 13:22:02.481380 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 13:22:02.481386 kernel: pstore: Using crash dump compression: deflate Jan 30 13:22:02.481392 kernel: pstore: Registered erst as persistent store backend Jan 30 13:22:02.481398 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:22:02.481403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:22:02.481409 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:22:02.481415 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:22:02.481421 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 30 13:22:02.481470 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 13:22:02.481485 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:22:02.481559 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 13:22:02.481604 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 13:22:02.481647 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T13:22:01 UTC (1738243321) Jan 30 13:22:02.481691 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:22:02.481699 kernel: intel_pstate: Intel P-state driver initializing Jan 30 13:22:02.481705 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 13:22:02.481713 kernel: intel_pstate: HWP enabled Jan 30 13:22:02.481719 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:22:02.481724 kernel: Segment Routing with IPv6 Jan 30 13:22:02.481730 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:22:02.481736 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:22:02.481742 kernel: Key type dns_resolver registered Jan 30 13:22:02.481747 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 13:22:02.481753 kernel: IPI shorthand broadcast: enabled Jan 30 13:22:02.481759 kernel: sched_clock: Marking stable (2490262503, 1448893929)->(4502124522, -562968090) Jan 30 13:22:02.481766 kernel: registered taskstats version 1 Jan 30 13:22:02.481772 kernel: Loading compiled-in X.509 certificates Jan 30 13:22:02.481777 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:22:02.481783 kernel: Key type .fscrypt registered Jan 30 13:22:02.481788 kernel: Key type fscrypt-provisioning registered Jan 30 13:22:02.481794 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:22:02.481800 kernel: ima: No architecture policies found Jan 30 13:22:02.481806 kernel: clk: Disabling unused clocks Jan 30 13:22:02.481812 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:22:02.481818 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:22:02.481824 kernel: Freeing unused kernel image (rodata/data gap) memory: 
1776K Jan 30 13:22:02.481830 kernel: Run /init as init process Jan 30 13:22:02.481836 kernel: with arguments: Jan 30 13:22:02.481841 kernel: /init Jan 30 13:22:02.481847 kernel: with environment: Jan 30 13:22:02.481852 kernel: HOME=/ Jan 30 13:22:02.481858 kernel: TERM=linux Jan 30 13:22:02.481864 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:22:02.481872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:22:02.481879 systemd[1]: Detected architecture x86-64. Jan 30 13:22:02.481885 systemd[1]: Running in initrd. Jan 30 13:22:02.481891 systemd[1]: No hostname configured, using default hostname. Jan 30 13:22:02.481896 systemd[1]: Hostname set to . Jan 30 13:22:02.481902 systemd[1]: Initializing machine ID from random generator. Jan 30 13:22:02.481908 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:22:02.481915 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:02.481921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:02.481928 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:22:02.481934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:22:02.481940 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:22:02.481947 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 30 13:22:02.481953 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:22:02.481961 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:22:02.481967 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:02.481973 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:02.481979 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:22:02.481985 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:22:02.481991 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:22:02.481997 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:22:02.482003 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:22:02.482010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:02.482016 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:22:02.482022 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:22:02.482029 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:02.482035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:02.482041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:02.482047 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:22:02.482053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 30 13:22:02.482059 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 30 13:22:02.482065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 30 13:22:02.482071 kernel: clocksource: Switched to clocksource tsc Jan 30 13:22:02.482077 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:22:02.482083 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:22:02.482089 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:22:02.482095 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:22:02.482111 systemd-journald[266]: Collecting audit messages is disabled. Jan 30 13:22:02.482127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:22:02.482134 systemd-journald[266]: Journal started Jan 30 13:22:02.482149 systemd-journald[266]: Runtime Journal (/run/log/journal/b0a96cb629844946b8064d7abe21010f) is 8.0M, max 639.9M, 631.9M free. Jan 30 13:22:02.485445 systemd-modules-load[267]: Inserted module 'overlay' Jan 30 13:22:02.502580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:02.525503 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:22:02.525520 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:22:02.532678 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:02.532770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:02.532944 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 30 13:22:02.539468 systemd-modules-load[267]: Inserted module 'br_netfilter' Jan 30 13:22:02.539554 kernel: Bridge firewalling registered Jan 30 13:22:02.543752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:22:02.559963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:22:02.620146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:02.640593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:02.669873 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:22:02.691792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:02.726803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:02.747817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:22:02.751569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:22:02.766952 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:02.767847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:22:02.769978 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:02.781718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:02.786550 systemd-resolved[300]: Positive Trust Anchors: Jan 30 13:22:02.786555 systemd-resolved[300]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:22:02.786581 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:22:02.788313 systemd-resolved[300]: Defaulting to hostname 'linux'. Jan 30 13:22:02.793776 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:22:02.814717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:22:02.830704 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:22:02.954790 dracut-cmdline[312]: dracut-dracut-053 Jan 30 13:22:02.961717 dracut-cmdline[312]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:22:03.139502 kernel: SCSI subsystem initialized Jan 30 13:22:03.151511 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:22:03.164529 kernel: iscsi: registered transport (tcp) Jan 30 13:22:03.185183 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:22:03.185200 kernel: QLogic iSCSI HBA Driver Jan 30 13:22:03.208096 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:22:03.231734 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:22:03.270213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:22:03.270251 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:22:03.279488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:22:03.317542 kernel: raid6: avx2x4 gen() 42911 MB/s Jan 30 13:22:03.338548 kernel: raid6: avx2x2 gen() 52675 MB/s Jan 30 13:22:03.364630 kernel: raid6: avx2x1 gen() 45106 MB/s Jan 30 13:22:03.364646 kernel: raid6: using algorithm avx2x2 gen() 52675 MB/s Jan 30 13:22:03.391709 kernel: raid6: .... xor() 32492 MB/s, rmw enabled Jan 30 13:22:03.391726 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:22:03.412483 kernel: xor: automatically using best checksumming function avx Jan 30 13:22:03.509523 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:22:03.515687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:03.539790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:03.547182 systemd-udevd[498]: Using default interface naming scheme 'v255'. Jan 30 13:22:03.550599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:22:03.577619 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:22:03.628643 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 30 13:22:03.645803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:22:03.666765 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:22:03.753376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:03.780031 kernel: pps_core: LinuxPPS API ver. 
1 registered Jan 30 13:22:03.780072 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:22:03.780083 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:22:03.791485 kernel: libata version 3.00 loaded. Jan 30 13:22:03.798487 kernel: PTP clock support registered Jan 30 13:22:03.798523 kernel: ACPI: bus type USB registered Jan 30 13:22:03.809023 kernel: usbcore: registered new interface driver usbfs Jan 30 13:22:03.814513 kernel: usbcore: registered new interface driver hub Jan 30 13:22:03.814574 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:22:03.814614 kernel: usbcore: registered new device driver usb Jan 30 13:22:03.825482 kernel: AES CTR mode by8 optimization enabled Jan 30 13:22:03.831489 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 13:22:04.066628 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 30 13:22:04.066713 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 13:22:04.066781 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 13:22:04.066791 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 30 13:22:04.066798 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:22:04.066864 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 13:22:04.066927 kernel: scsi host0: ahci Jan 30 13:22:04.066992 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 13:22:04.067058 kernel: scsi host1: ahci Jan 30 13:22:04.067122 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:22:04.067185 kernel: scsi host2: ahci Jan 30 13:22:04.067245 kernel: pps pps0: new PPS source ptp0 Jan 30 13:22:04.067306 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 30 13:22:04.067373 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:22:04.067438 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:54 Jan 30 13:22:04.067512 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 30 13:22:04.067578 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 13:22:04.067642 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 13:22:04.067704 kernel: scsi host3: ahci Jan 30 13:22:04.067765 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 13:22:04.067827 kernel: scsi host4: ahci Jan 30 13:22:04.067886 kernel: hub 1-0:1.0: USB hub found Jan 30 13:22:04.067957 kernel: scsi host5: ahci Jan 30 13:22:04.068017 kernel: pps pps1: new PPS source ptp1 Jan 30 13:22:04.068076 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 30 13:22:04.068142 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:22:04.068205 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:55 Jan 30 13:22:04.068268 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 30 13:22:04.068330 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 13:22:04.068395 kernel: hub 1-0:1.0: 16 ports detected Jan 30 13:22:04.068456 kernel: scsi host6: ahci Jan 30 13:22:04.068521 kernel: hub 2-0:1.0: USB hub found Jan 30 13:22:04.068585 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 30 13:22:04.068593 kernel: hub 2-0:1.0: 10 ports detected Jan 30 13:22:04.068652 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 30 13:22:04.068660 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 30 13:22:04.068725 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 30 13:22:04.068734 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 30 13:22:04.068741 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 30 13:22:04.068749 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 30 13:22:04.068756 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 30 13:22:03.838769 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:22:03.874761 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 30 13:22:04.146620 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Jan 30 13:22:04.570111 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:22:04.570196 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 30 13:22:04.570270 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 13:22:04.786846 kernel: hub 1-14:1.0: USB hub found Jan 30 13:22:04.786942 kernel: hub 1-14:1.0: 4 ports detected Jan 30 13:22:04.787019 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 13:22:04.787093 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:22:04.787102 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 30 13:22:04.787168 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:22:04.787176 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 13:22:04.787184 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 13:22:04.787192 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:22:04.787199 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:22:04.787206 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:22:04.787216 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 13:22:04.787223 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 13:22:04.787231 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:22:04.787238 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:22:04.787246 kernel: ata2.00: Features: NCQ-prio Jan 30 13:22:04.787253 kernel: ata1.00: Features: NCQ-prio Jan 30 13:22:04.787261 kernel: ata2.00: configured for UDMA/133 Jan 30 13:22:04.787268 kernel: ata1.00: configured for UDMA/133 Jan 30 13:22:04.787275 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 
13:22:04.787348 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 13:22:04.787413 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:22:04.787421 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:22:04.787429 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:22:04.787494 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:22:04.787556 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:22:04.787619 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 30 13:22:04.787684 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 30 13:22:04.787746 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 30 13:22:04.787808 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 13:22:04.787868 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:22:04.787928 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:22:04.787995 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Jan 30 13:22:05.081818 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:22:05.082212 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 13:22:05.082565 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 13:22:05.082873 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:22:05.083199 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:22:05.083268 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 13:22:05.083660 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 30 13:22:05.084046 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:22:05.084113 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 30 13:22:05.084193 kernel: GPT:9289727 != 937703087 Jan 30 13:22:05.084256 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:22:05.084315 kernel: GPT:9289727 != 937703087 Jan 30 13:22:05.084375 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:22:05.084435 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 13:22:05.084513 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 30 13:22:05.085001 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 13:22:05.085791 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by (udev-worker) (550) Jan 30 13:22:05.085881 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sdb3 scanned by (udev-worker) (553) Jan 30 13:22:05.085950 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:22:05.086015 kernel: usbcore: registered new interface driver usbhid Jan 30 13:22:05.086077 kernel: usbhid: USB HID core driver Jan 30 13:22:05.086139 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 13:22:05.086199 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 13:22:05.086621 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:22:05.086669 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 30 13:22:05.087026 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 13:22:05.087085 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 13:22:05.087496 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 13:22:05.087550 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 13:22:05.087924 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) 
RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:22:05.088283 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 30 13:22:04.004059 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:22:05.110739 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 30 13:22:04.136669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:04.157637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:22:04.167606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:22:04.167677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:04.178612 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:05.166742 disk-uuid[703]: Primary Header is updated. Jan 30 13:22:05.166742 disk-uuid[703]: Secondary Entries is updated. Jan 30 13:22:05.166742 disk-uuid[703]: Secondary Header is updated. Jan 30 13:22:04.195634 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:22:04.211582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:04.211653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:04.222585 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:04.239630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:04.249831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:22:04.272242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:04.301628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:22:04.310687 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:22:04.726817 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 30 13:22:04.756380 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 30 13:22:04.774476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 13:22:04.790671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 13:22:04.827625 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 13:22:04.855590 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:22:05.885592 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:22:05.893378 disk-uuid[704]: The operation has completed successfully. Jan 30 13:22:05.901654 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 13:22:05.927922 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:22:05.928020 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:22:05.973785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:22:05.999610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:22:05.999668 sh[728]: Success Jan 30 13:22:06.032803 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:22:06.050618 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:22:06.061894 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:22:06.117864 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:22:06.117886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:22:06.127493 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:22:06.134555 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:22:06.140384 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:22:06.154529 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:22:06.156869 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:22:06.166864 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:22:06.180969 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:22:06.193780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:22:06.229312 kernel: BTRFS info (device sdb6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:22:06.229331 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:22:06.236340 kernel: BTRFS info (device sdb6): using free space tree Jan 30 13:22:06.254117 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 13:22:06.254133 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 13:22:06.259735 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:22:06.282750 kernel: BTRFS info (device sdb6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:22:06.273182 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:22:06.293253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 13:22:06.341797 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:06.357725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:22:06.374600 systemd-networkd[911]: lo: Link UP Jan 30 13:22:06.374604 systemd-networkd[911]: lo: Gained carrier Jan 30 13:22:06.376996 systemd-networkd[911]: Enumeration completed Jan 30 13:22:06.389019 ignition[805]: Ignition 2.20.0 Jan 30 13:22:06.377599 systemd-networkd[911]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:22:06.389024 ignition[805]: Stage: fetch-offline Jan 30 13:22:06.380605 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:22:06.389045 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:06.391286 unknown[805]: fetched base config from "system" Jan 30 13:22:06.389050 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:22:06.391293 unknown[805]: fetched user config from "system" Jan 30 13:22:06.389103 ignition[805]: parsed url from cmdline: "" Jan 30 13:22:06.404399 systemd-networkd[911]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:22:06.389105 ignition[805]: no config URL provided Jan 30 13:22:06.404735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:22:06.389107 ignition[805]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:22:06.423089 systemd[1]: Reached target network.target - Network. Jan 30 13:22:06.389130 ignition[805]: parsing config with SHA512: 81cabcb72845019c0732f687dc04a50b3b531485dec95a7c6f48049a2c8200b44625f237907d33eca8c69a544762cd387f8ec5f38a50a84ef2a5220a1402b4cc Jan 30 13:22:06.432899 systemd-networkd[911]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:22:06.391515 ignition[805]: fetch-offline: fetch-offline passed
Jan 30 13:22:06.437615 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:22:06.391519 ignition[805]: POST message to Packet Timeline
Jan 30 13:22:06.606653 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 30 13:22:06.450711 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:22:06.391523 ignition[805]: POST Status error: resource requires networking
Jan 30 13:22:06.600998 systemd-networkd[911]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:06.391575 ignition[805]: Ignition finished successfully
Jan 30 13:22:06.464568 ignition[923]: Ignition 2.20.0
Jan 30 13:22:06.464573 ignition[923]: Stage: kargs
Jan 30 13:22:06.464672 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:06.464678 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:06.465150 ignition[923]: kargs: kargs passed
Jan 30 13:22:06.465153 ignition[923]: POST message to Packet Timeline
Jan 30 13:22:06.465164 ignition[923]: GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:06.465507 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59409->[::1]:53: read: connection refused
Jan 30 13:22:06.666049 ignition[923]: GET https://metadata.packet.net/metadata: attempt #2
Jan 30 13:22:06.667019 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37521->[::1]:53: read: connection refused
Jan 30 13:22:06.809513 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 30 13:22:06.810204 systemd-networkd[911]: eno1: Link UP
Jan 30 13:22:06.810364 systemd-networkd[911]: eno2: Link UP
Jan 30 13:22:06.810516 systemd-networkd[911]: enp1s0f0np0: Link UP
Jan 30 13:22:06.810688 systemd-networkd[911]: enp1s0f0np0: Gained carrier
Jan 30 13:22:06.823795 systemd-networkd[911]: enp1s0f1np1: Link UP
Jan 30 13:22:06.859746 systemd-networkd[911]: enp1s0f0np0: DHCPv4 address 147.75.90.199/31, gateway 147.75.90.198 acquired from 145.40.83.140
Jan 30 13:22:07.067589 ignition[923]: GET https://metadata.packet.net/metadata: attempt #3
Jan 30 13:22:07.068737 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43889->[::1]:53: read: connection refused
Jan 30 13:22:07.611285 systemd-networkd[911]: enp1s0f1np1: Gained carrier
Jan 30 13:22:07.869741 ignition[923]: GET https://metadata.packet.net/metadata: attempt #4
Jan 30 13:22:07.870745 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52925->[::1]:53: read: connection refused
Jan 30 13:22:08.571081 systemd-networkd[911]: enp1s0f0np0: Gained IPv6LL
Jan 30 13:22:09.211099 systemd-networkd[911]: enp1s0f1np1: Gained IPv6LL
Jan 30 13:22:09.471644 ignition[923]: GET https://metadata.packet.net/metadata: attempt #5
Jan 30 13:22:09.472666 ignition[923]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49541->[::1]:53: read: connection refused
Jan 30 13:22:12.675443 ignition[923]: GET https://metadata.packet.net/metadata: attempt #6
Jan 30 13:22:13.122284 ignition[923]: GET result: OK
Jan 30 13:22:13.518853 ignition[923]: Ignition finished successfully
Jan 30 13:22:13.524742 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:13.553739 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:22:13.559823 ignition[943]: Ignition 2.20.0
Jan 30 13:22:13.559828 ignition[943]: Stage: disks
Jan 30 13:22:13.559938 ignition[943]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:13.559945 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:13.560470 ignition[943]: disks: disks passed
Jan 30 13:22:13.560473 ignition[943]: POST message to Packet Timeline
Jan 30 13:22:13.560488 ignition[943]: GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:14.343261 ignition[943]: GET result: OK
Jan 30 13:22:14.695789 ignition[943]: Ignition finished successfully
Jan 30 13:22:14.697651 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:22:14.713765 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:14.732920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:22:14.753909 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:14.775801 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:14.795792 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:14.834721 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:22:14.868842 systemd-fsck[959]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:22:14.878951 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:22:14.912678 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:22:14.984376 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:22:15.001708 kernel: EXT4-fs (sdb9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 13:22:14.993909 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:15.024679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:15.071521 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by mount (968)
Jan 30 13:22:15.071535 kernel: BTRFS info (device sdb6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:15.071546 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:22:15.071553 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 13:22:15.033946 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:22:15.101712 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 13:22:15.101725 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 13:22:15.093755 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:22:15.128581 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 30 13:22:15.139571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:22:15.139589 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:15.194650 coreos-metadata[985]: Jan 30 13:22:15.159 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 13:22:15.214671 coreos-metadata[986]: Jan 30 13:22:15.159 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 13:22:15.159615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:15.184774 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:22:15.213761 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:22:15.264552 initrd-setup-root[1000]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:22:15.274603 initrd-setup-root[1007]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:22:15.284595 initrd-setup-root[1014]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:22:15.295594 initrd-setup-root[1021]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:22:15.302986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:22:15.325755 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:22:15.350698 kernel: BTRFS info (device sdb6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:15.326375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:15.359490 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:22:15.380923 ignition[1088]: INFO : Ignition 2.20.0
Jan 30 13:22:15.380923 ignition[1088]: INFO : Stage: mount
Jan 30 13:22:15.404677 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:15.404677 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:15.404677 ignition[1088]: INFO : mount: mount passed
Jan 30 13:22:15.404677 ignition[1088]: INFO : POST message to Packet Timeline
Jan 30 13:22:15.404677 ignition[1088]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:15.385818 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:22:15.787140 coreos-metadata[986]: Jan 30 13:22:15.787 INFO Fetch successful
Jan 30 13:22:15.859269 coreos-metadata[985]: Jan 30 13:22:15.859 INFO Fetch successful
Jan 30 13:22:15.866606 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 30 13:22:15.866670 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 30 13:22:15.900575 coreos-metadata[985]: Jan 30 13:22:15.887 INFO wrote hostname ci-4186.1.0-a-9d6a1ac7ae to /sysroot/etc/hostname
Jan 30 13:22:15.888819 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:16.107052 ignition[1088]: INFO : GET result: OK
Jan 30 13:22:16.666917 ignition[1088]: INFO : Ignition finished successfully
Jan 30 13:22:16.669999 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:22:16.704685 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:22:16.715072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:16.776329 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (1112)
Jan 30 13:22:16.776357 kernel: BTRFS info (device sdb6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:22:16.784411 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:22:16.790296 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 13:22:16.805202 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 13:22:16.805218 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 13:22:16.807866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:22:16.833380 ignition[1129]: INFO : Ignition 2.20.0
Jan 30 13:22:16.833380 ignition[1129]: INFO : Stage: files
Jan 30 13:22:16.847694 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:16.847694 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:16.847694 ignition[1129]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:22:16.847694 ignition[1129]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:22:16.847694 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:22:16.837318 unknown[1129]: wrote ssh authorized keys file for user: core
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:16.978678 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:17.227816 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:22:17.402923 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:22:17.899394 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:22:17.899394 ignition[1129]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:22:17.929789 ignition[1129]: INFO : files: files passed
Jan 30 13:22:17.929789 ignition[1129]: INFO : POST message to Packet Timeline
Jan 30 13:22:17.929789 ignition[1129]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:18.470992 ignition[1129]: INFO : GET result: OK
Jan 30 13:22:18.819872 ignition[1129]: INFO : Ignition finished successfully
Jan 30 13:22:18.822325 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:22:18.850746 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:22:18.862134 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:22:18.883844 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:22:18.883913 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:22:18.907163 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:22:18.956727 initrd-setup-root-after-ignition[1168]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:18.956727 initrd-setup-root-after-ignition[1168]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:18.927188 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:22:19.004811 initrd-setup-root-after-ignition[1172]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:22:18.963056 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:22:19.040064 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:22:19.040120 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:22:19.058921 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:22:19.079683 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:22:19.096846 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:22:19.106836 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:22:19.185835 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:22:19.215029 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:22:19.245250 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:19.257109 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:19.278207 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:22:19.297260 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:22:19.297683 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:22:19.325206 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:22:19.347100 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:22:19.366222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:22:19.385194 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:19.406085 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:19.427113 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:22:19.447097 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:22:19.469146 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:22:19.491129 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:22:19.511096 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:22:19.530102 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:22:19.530531 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:22:19.556305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:19.577124 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:19.598985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:22:19.599437 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:19.622990 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:22:19.623388 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:22:19.655088 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:22:19.655553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:22:19.675287 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:22:19.693973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:22:19.694435 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:19.704375 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:22:19.732204 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:22:19.740328 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:22:19.740651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:22:19.767225 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:22:19.767542 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:22:19.790307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:22:19.790733 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:22:19.809290 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:22:19.925669 ignition[1193]: INFO : Ignition 2.20.0
Jan 30 13:22:19.925669 ignition[1193]: INFO : Stage: umount
Jan 30 13:22:19.925669 ignition[1193]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:19.925669 ignition[1193]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 13:22:19.925669 ignition[1193]: INFO : umount: umount passed
Jan 30 13:22:19.925669 ignition[1193]: INFO : POST message to Packet Timeline
Jan 30 13:22:19.925669 ignition[1193]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 13:22:19.809697 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:22:19.827289 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:22:19.827701 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:22:19.856624 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:22:19.887730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:22:19.888756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:22:19.888869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:19.917791 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:22:19.917887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:22:19.963691 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:22:19.968580 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:22:19.968900 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:22:20.043159 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:22:20.043279 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:22:20.512425 ignition[1193]: INFO : GET result: OK
Jan 30 13:22:20.845560 ignition[1193]: INFO : Ignition finished successfully
Jan 30 13:22:20.848725 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:22:20.849010 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:22:20.864850 systemd[1]: Stopped target network.target - Network.
Jan 30 13:22:20.879735 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:22:20.879997 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:22:20.897920 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:22:20.898057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:20.915994 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:22:20.916154 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:22:20.933994 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:22:20.934159 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:22:20.951988 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:22:20.952157 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:22:20.960566 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:22:20.974665 systemd-networkd[911]: enp1s0f1np1: DHCPv6 lease lost
Jan 30 13:22:20.985709 systemd-networkd[911]: enp1s0f0np0: DHCPv6 lease lost
Jan 30 13:22:20.988128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:21.007689 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:22:21.007963 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:21.028146 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:22:21.028591 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:22:21.049131 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:22:21.049251 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:21.083700 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:22:21.106679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:22:21.106752 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:22:21.126783 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:22:21.126873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:21.144883 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:22:21.145049 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:21.164875 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:22:21.165039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:21.185105 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:21.206786 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:22:21.207157 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:21.237653 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:22:21.237797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:21.244975 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:22:21.245081 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:21.272797 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:22:21.272958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:22:21.303059 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:22:21.303228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:22:21.342688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:22:21.342853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:21.390600 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:22:21.422546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:22:21.422586 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:21.444644 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:22:21.659645 systemd-journald[266]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:22:21.444710 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:22:21.465772 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:22:21.465916 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:21.486791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:21.486930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:21.510117 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:22:21.510378 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:22:21.529641 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:22:21.529892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:22:21.549835 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:22:21.584767 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:22:21.603715 systemd[1]: Switching root.
Jan 30 13:22:21.772618 systemd-journald[266]: Journal stopped
Jan 30 13:22:23.414318 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:22:23.414333 kernel: SELinux: policy capability open_perms=1
Jan 30 13:22:23.414341 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:22:23.414347 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:22:23.414354 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:22:23.414359 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:22:23.414366 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:22:23.414371 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:22:23.414377 kernel: audit: type=1403 audit(1738243341.892:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:22:23.414384 systemd[1]: Successfully loaded SELinux policy in 73.367ms.
Jan 30 13:22:23.414392 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.906ms.
Jan 30 13:22:23.414400 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:22:23.414406 systemd[1]: Detected architecture x86-64.
Jan 30 13:22:23.414412 systemd[1]: Detected first boot.
Jan 30 13:22:23.414419 systemd[1]: Hostname set to .
Jan 30 13:22:23.414427 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:22:23.414433 zram_generator::config[1245]: No configuration found.
Jan 30 13:22:23.414440 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:22:23.414447 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:22:23.414453 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:22:23.414460 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:22:23.414466 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:22:23.414474 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:22:23.414522 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:22:23.414530 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:22:23.414537 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:22:23.414543 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:22:23.414550 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:22:23.414557 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:22:23.414565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:23.414572 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:23.414579 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:22:23.414586 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:22:23.414592 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:22:23.414599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:22:23.414606 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jan 30 13:22:23.414612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:23.414620 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:22:23.414627 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:22:23.414634 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:23.414642 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:22:23.414649 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:23.414656 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:22:23.414662 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:22:23.414669 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:22:23.414677 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:22:23.414684 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:22:23.414691 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:23.414698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:23.414705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:23.414713 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:22:23.414721 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:22:23.414727 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:22:23.414734 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:22:23.414742 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:22:23.414749 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:22:23.414755 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:22:23.414762 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:22:23.414771 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:22:23.414778 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:22:23.414785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:22:23.414793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:23.414800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:22:23.414806 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:22:23.414813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:22:23.414820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:22:23.414831 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:22:23.414838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:22:23.414844 kernel: ACPI: bus type drm_connector registered
Jan 30 13:22:23.414851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:22:23.414858 kernel: fuse: init (API version 7.39)
Jan 30 13:22:23.414864 kernel: loop: module loaded
Jan 30 13:22:23.414870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:22:23.414878 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:22:23.414886 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:22:23.414893 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:22:23.414900 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:22:23.414906 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:22:23.414922 systemd-journald[1348]: Collecting audit messages is disabled.
Jan 30 13:22:23.414939 systemd-journald[1348]: Journal started
Jan 30 13:22:23.414953 systemd-journald[1348]: Runtime Journal (/run/log/journal/44a86b2876b24d82ae72cecaf68cfff9) is 8.0M, max 639.9M, 631.9M free.
Jan 30 13:22:22.295164 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:22:22.309319 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Jan 30 13:22:22.309563 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:22:23.427529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:22:23.455673 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:22:23.477570 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:22:23.498547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:22:23.519730 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:22:23.519755 systemd[1]: Stopped verity-setup.service.
Jan 30 13:22:23.545535 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:22:23.545576 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:22:23.562916 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:22:23.573807 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:22:23.584766 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:22:23.594755 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:22:23.604765 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:22:23.614725 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:22:23.624812 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:22:23.635831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:23.646938 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:22:23.647095 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:22:23.659057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:22:23.659271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:22:23.671350 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:22:23.671756 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:22:23.682434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:22:23.682847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:22:23.694431 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:22:23.694838 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:22:23.706425 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:22:23.706835 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:22:23.718427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:23.730403 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:22:23.742353 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:22:23.754394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:23.790851 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:22:23.815730 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:22:23.827727 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:22:23.837723 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:22:23.837748 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:23.848299 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:22:23.866785 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:22:23.878523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:22:23.889756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:23.891552 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:22:23.902113 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:22:23.912628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:22:23.913439 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:22:23.920753 systemd-journald[1348]: Time spent on flushing to /var/log/journal/44a86b2876b24d82ae72cecaf68cfff9 is 12.797ms for 1362 entries.
Jan 30 13:22:23.920753 systemd-journald[1348]: System Journal (/var/log/journal/44a86b2876b24d82ae72cecaf68cfff9) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:22:23.945625 systemd-journald[1348]: Received client request to flush runtime journal.
Jan 30 13:22:23.929284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:22:23.930145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:22:23.948022 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:22:23.960243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:22:23.970544 kernel: loop0: detected capacity change from 0 to 210664
Jan 30 13:22:23.977304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:22:23.989325 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Jan 30 13:22:23.989334 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Jan 30 13:22:23.993475 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:22:23.993584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:22:24.005678 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:22:24.016710 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:22:24.027712 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:22:24.039549 kernel: loop1: detected capacity change from 0 to 141000
Jan 30 13:22:24.044850 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:22:24.056670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:24.066700 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:22:24.080666 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:22:24.095532 kernel: loop2: detected capacity change from 0 to 8
Jan 30 13:22:24.107797 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:22:24.119273 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:22:24.129067 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:22:24.129542 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:22:24.146488 kernel: loop3: detected capacity change from 0 to 138184
Jan 30 13:22:24.146790 udevadm[1384]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:22:24.150009 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:22:24.170657 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:22:24.178263 systemd-tmpfiles[1402]: ACLs are not supported, ignoring.
Jan 30 13:22:24.178273 systemd-tmpfiles[1402]: ACLs are not supported, ignoring.
Jan 30 13:22:24.181871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:24.219486 kernel: loop4: detected capacity change from 0 to 210664
Jan 30 13:22:24.243541 kernel: loop5: detected capacity change from 0 to 141000
Jan 30 13:22:24.262391 ldconfig[1374]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:22:24.263553 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:22:24.267519 kernel: loop6: detected capacity change from 0 to 8
Jan 30 13:22:24.274547 kernel: loop7: detected capacity change from 0 to 138184
Jan 30 13:22:24.287446 (sd-merge)[1407]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Jan 30 13:22:24.287743 (sd-merge)[1407]: Merged extensions into '/usr'.
Jan 30 13:22:24.290206 systemd[1]: Reloading requested from client PID 1380 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:22:24.290213 systemd[1]: Reloading...
Jan 30 13:22:24.315493 zram_generator::config[1432]: No configuration found.
Jan 30 13:22:24.386240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:22:24.425623 systemd[1]: Reloading finished in 135 ms.
Jan 30 13:22:24.452661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:22:24.463834 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:22:24.486728 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:22:24.494491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:22:24.506875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:24.521186 systemd[1]: Reloading requested from client PID 1489 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:22:24.521201 systemd[1]: Reloading...
Jan 30 13:22:24.523801 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:22:24.523979 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:22:24.524687 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:22:24.524912 systemd-tmpfiles[1490]: ACLs are not supported, ignoring.
Jan 30 13:22:24.524963 systemd-tmpfiles[1490]: ACLs are not supported, ignoring.
Jan 30 13:22:24.526952 systemd-tmpfiles[1490]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:22:24.526956 systemd-tmpfiles[1490]: Skipping /boot
Jan 30 13:22:24.532482 systemd-tmpfiles[1490]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:22:24.532486 systemd-tmpfiles[1490]: Skipping /boot
Jan 30 13:22:24.535674 systemd-udevd[1491]: Using default interface naming scheme 'v255'.
Jan 30 13:22:24.548535 zram_generator::config[1518]: No configuration found.
Jan 30 13:22:24.590128 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Jan 30 13:22:24.590347 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (1539)
Jan 30 13:22:24.590366 kernel: ACPI: button: Sleep Button [SLPB]
Jan 30 13:22:24.600117 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:22:24.600166 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 13:22:24.642488 kernel: IPMI message handler: version 39.2
Jan 30 13:22:24.642561 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jan 30 13:22:24.662274 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jan 30 13:22:24.662415 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:22:24.662438 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Jan 30 13:22:24.676761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:22:24.684486 kernel: iTCO_vendor_support: vendor-support=0
Jan 30 13:22:24.684516 kernel: ipmi device interface
Jan 30 13:22:24.701540 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Jan 30 13:22:24.701818 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Jan 30 13:22:24.713391 kernel: ipmi_si: IPMI System Interface driver
Jan 30 13:22:24.720541 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Jan 30 13:22:24.734729 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Jan 30 13:22:24.734743 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Jan 30 13:22:24.734753 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Jan 30 13:22:24.758654 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jan 30 13:22:24.758736 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Jan 30 13:22:24.758810 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Jan 30 13:22:24.758825 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Jan 30 13:22:24.751642 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Jan 30 13:22:24.752047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 30 13:22:24.793133 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Jan 30 13:22:24.793499 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jan 30 13:22:24.799739 systemd[1]: Reloading finished in 278 ms.
Jan 30 13:22:24.823393 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:24.827821 kernel: intel_rapl_common: Found RAPL domain package
Jan 30 13:22:24.827843 kernel: intel_rapl_common: Found RAPL domain core
Jan 30 13:22:24.828483 kernel: intel_rapl_common: Found RAPL domain dram
Jan 30 13:22:24.845483 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Jan 30 13:22:24.863772 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:24.884517 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:22:24.886520 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Jan 30 13:22:24.905456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:22:24.920603 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:22:24.930355 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:22:24.940213 augenrules[1690]: No rules
Jan 30 13:22:24.942626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:22:24.943226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:22:24.954484 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Jan 30 13:22:24.961508 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jan 30 13:22:24.974908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:22:24.986107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:22:24.998045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:22:25.007641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:22:25.008230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:22:25.033872 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:22:25.045426 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:22:25.046455 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:25.047388 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:22:25.084633 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:22:25.096150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:25.105566 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:22:25.106065 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:22:25.116719 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:22:25.116805 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:22:25.117058 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:22:25.117198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:22:25.117268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:22:25.117408 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:22:25.117474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:22:25.117620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:22:25.117684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:22:25.117818 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:22:25.117880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:22:25.118013 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:22:25.118152 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:22:25.123113 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:22:25.136742 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:22:25.136777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:22:25.136812 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:22:25.137473 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:22:25.138416 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:22:25.138447 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:22:25.143895 lvm[1719]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:22:25.146407 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:22:25.158865 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:22:25.178130 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:22:25.190441 systemd-resolved[1703]: Positive Trust Anchors:
Jan 30 13:22:25.190451 systemd-resolved[1703]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:22:25.190487 systemd-resolved[1703]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:22:25.193841 systemd-resolved[1703]: Using system hostname 'ci-4186.1.0-a-9d6a1ac7ae'.
Jan 30 13:22:25.197057 systemd-networkd[1702]: lo: Link UP
Jan 30 13:22:25.197060 systemd-networkd[1702]: lo: Gained carrier
Jan 30 13:22:25.199468 systemd-networkd[1702]: bond0: netdev ready
Jan 30 13:22:25.200435 systemd-networkd[1702]: Enumeration completed
Jan 30 13:22:25.205105 systemd-networkd[1702]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:d5:7c.network.
Jan 30 13:22:25.246687 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:22:25.257796 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:25.267602 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:25.277710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:25.289788 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:25.299558 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:25.307551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:25.318526 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:25.328573 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:22:25.339539 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:22:25.350520 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:22:25.362512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:22:25.362529 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:22:25.370627 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:22:25.381248 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:22:25.391035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:22:25.402818 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:22:25.413546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:22:25.428691 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:22:25.439129 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:22:25.449379 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:22:25.466449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:22:25.466511 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 30 13:22:25.469316 lvm[1743]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:22:25.479488 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Jan 30 13:22:25.484166 systemd-networkd[1702]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:d5:7d.network.
Jan 30 13:22:25.486122 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:22:25.495730 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:22:25.505602 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:25.514617 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:22:25.514632 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:22:25.534642 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:22:25.545489 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:22:25.556209 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:22:25.565247 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:22:25.570374 coreos-metadata[1746]: Jan 30 13:22:25.570 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 13:22:25.571319 coreos-metadata[1746]: Jan 30 13:22:25.571 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Jan 30 13:22:25.575213 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:22:25.576916 jq[1750]: false
Jan 30 13:22:25.577230 dbus-daemon[1747]: [system] SELinux support is enabled
Jan 30 13:22:25.584708 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:22:25.585382 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:22:25.593647 extend-filesystems[1752]: Found loop4
Jan 30 13:22:25.593647 extend-filesystems[1752]: Found loop5
Jan 30 13:22:25.659611 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Jan 30 13:22:25.659628 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (1637)
Jan 30 13:22:25.659638 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 30 13:22:25.659757 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Jan 30 13:22:25.659768 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Jan 30 13:22:25.595384 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found loop6
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found loop7
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sda
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb1
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb2
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb3
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found usr
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb4
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb6
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb7
Jan 30 13:22:25.659825 extend-filesystems[1752]: Found sdb9
Jan 30 13:22:25.659825 extend-filesystems[1752]: Checking size of /dev/sdb9
Jan 30 13:22:25.659825 extend-filesystems[1752]: Resized partition /dev/sdb9
Jan 30 13:22:25.813582 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Jan 30 13:22:25.813612 kernel: bond0: active interface up!
Jan 30 13:22:25.637614 systemd-networkd[1702]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Jan 30 13:22:25.813746 extend-filesystems[1762]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:22:25.639102 systemd-networkd[1702]: enp1s0f0np0: Link UP
Jan 30 13:22:25.639288 systemd-networkd[1702]: enp1s0f0np0: Gained carrier
Jan 30 13:22:25.655653 systemd-networkd[1702]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:d5:7c.network.
Jan 30 13:22:25.655820 systemd-networkd[1702]: enp1s0f1np1: Link UP
Jan 30 13:22:25.838859 sshd_keygen[1775]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:22:25.655983 systemd-networkd[1702]: enp1s0f1np1: Gained carrier
Jan 30 13:22:25.838938 update_engine[1777]: I20250130 13:22:25.776965 1777 main.cc:92] Flatcar Update Engine starting
Jan 30 13:22:25.838938 update_engine[1777]: I20250130 13:22:25.777643 1777 update_check_scheduler.cc:74] Next update check in 7m40s
Jan 30 13:22:25.663104 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:22:25.839153 jq[1778]: true
Jan 30 13:22:25.677650 systemd-networkd[1702]: bond0: Link UP
Jan 30 13:22:25.677832 systemd-networkd[1702]: bond0: Gained carrier
Jan 30 13:22:25.677950 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:25.678265 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:25.678458 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:25.678548 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:25.683191 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:22:25.691090 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:22:25.723437 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Jan 30 13:22:25.744895 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:22:25.745283 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:22:25.751798 systemd-logind[1772]: Watching system buttons on /dev/input/event3 (Power Button)
Jan 30 13:22:25.751809 systemd-logind[1772]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 30 13:22:25.751820 systemd-logind[1772]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Jan 30 13:22:25.761593 systemd-logind[1772]: New seat seat0.
Jan 30 13:22:25.762345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:22:25.784769 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:22:25.807205 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:22:25.829838 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:22:25.857782 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:22:25.857906 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:22:25.858120 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:22:25.858229 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:22:25.870483 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Jan 30 13:22:25.877004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:22:25.877124 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:22:25.888718 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:22:25.902543 (ntainerd)[1790]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:22:25.903941 jq[1789]: true
Jan 30 13:22:25.906657 dbus-daemon[1747]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 13:22:25.908090 tar[1787]: linux-amd64/helm
Jan 30 13:22:25.911955 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Jan 30 13:22:25.912060 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Jan 30 13:22:25.916836 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:22:25.950727 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:22:25.958548 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:22:25.958652 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:22:25.968400 bash[1819]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:22:25.969599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:22:25.969727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:22:25.988672 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:22:26.001845 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:22:26.007973 locksmithd[1826]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:22:26.013775 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:22:26.013867 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:22:26.038680 systemd[1]: Starting sshkeys.service...
Jan 30 13:22:26.046312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:22:26.071662 containerd[1790]: time="2025-01-30T13:22:26.071558506Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:22:26.071843 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 13:22:26.083383 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 13:22:26.084145 containerd[1790]: time="2025-01-30T13:22:26.084125418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.084858 containerd[1790]: time="2025-01-30T13:22:26.084841133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:22:26.084901 containerd[1790]: time="2025-01-30T13:22:26.084858032Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:22:26.084901 containerd[1790]: time="2025-01-30T13:22:26.084882304Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:22:26.084993 containerd[1790]: time="2025-01-30T13:22:26.084983630Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:22:26.085026 containerd[1790]: time="2025-01-30T13:22:26.084996881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085063 containerd[1790]: time="2025-01-30T13:22:26.085050064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085090 containerd[1790]: time="2025-01-30T13:22:26.085063301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085187 containerd[1790]: time="2025-01-30T13:22:26.085176359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085217 containerd[1790]: time="2025-01-30T13:22:26.085186573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085217 containerd[1790]: time="2025-01-30T13:22:26.085198569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085217 containerd[1790]: time="2025-01-30T13:22:26.085208799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085291 containerd[1790]: time="2025-01-30T13:22:26.085270502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085468 containerd[1790]: time="2025-01-30T13:22:26.085458736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085546 containerd[1790]: time="2025-01-30T13:22:26.085535781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:22:26.085576 containerd[1790]: time="2025-01-30T13:22:26.085545933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:22:26.085614 containerd[1790]: time="2025-01-30T13:22:26.085605887Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:22:26.085653 containerd[1790]: time="2025-01-30T13:22:26.085644801Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:22:26.094996 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:22:26.097272 containerd[1790]: time="2025-01-30T13:22:26.097258283Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:22:26.097303 containerd[1790]: time="2025-01-30T13:22:26.097286473Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:22:26.097303 containerd[1790]: time="2025-01-30T13:22:26.097296775Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:22:26.097342 containerd[1790]: time="2025-01-30T13:22:26.097306359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:22:26.097342 containerd[1790]: time="2025-01-30T13:22:26.097314990Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:22:26.097398 containerd[1790]: time="2025-01-30T13:22:26.097390139Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:22:26.097529 containerd[1790]: time="2025-01-30T13:22:26.097520773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:22:26.097584 containerd[1790]: time="2025-01-30T13:22:26.097576426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:22:26.097603 containerd[1790]: time="2025-01-30T13:22:26.097586029Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:22:26.097603 containerd[1790]: time="2025-01-30T13:22:26.097594616Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:22:26.097638 containerd[1790]: time="2025-01-30T13:22:26.097602403Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097638 containerd[1790]: time="2025-01-30T13:22:26.097609789Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097638 containerd[1790]: time="2025-01-30T13:22:26.097616737Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097638 containerd[1790]: time="2025-01-30T13:22:26.097629213Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097638 containerd[1790]: time="2025-01-30T13:22:26.097637352Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097715 containerd[1790]: time="2025-01-30T13:22:26.097644872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097801 containerd[1790]: time="2025-01-30T13:22:26.097762305Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097801 containerd[1790]: time="2025-01-30T13:22:26.097785692Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:22:26.097845 containerd[1790]: time="2025-01-30T13:22:26.097817789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.097845 containerd[1790]: time="2025-01-30T13:22:26.097836769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.097885 containerd[1790]: time="2025-01-30T13:22:26.097856911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.097885 containerd[1790]: time="2025-01-30T13:22:26.097875697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.097922 containerd[1790]: time="2025-01-30T13:22:26.097889725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098011 containerd[1790]: time="2025-01-30T13:22:26.097909395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098030 containerd[1790]: time="2025-01-30T13:22:26.098019093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098045 containerd[1790]: time="2025-01-30T13:22:26.098033912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098128 containerd[1790]: time="2025-01-30T13:22:26.098117323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098150 containerd[1790]: time="2025-01-30T13:22:26.098133561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098150 containerd[1790]: time="2025-01-30T13:22:26.098141694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098179 containerd[1790]: time="2025-01-30T13:22:26.098149372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098179 containerd[1790]: time="2025-01-30T13:22:26.098156488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098179 containerd[1790]: time="2025-01-30T13:22:26.098164931Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:22:26.098229 containerd[1790]: time="2025-01-30T13:22:26.098179944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098229 containerd[1790]: time="2025-01-30T13:22:26.098187895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098229 containerd[1790]: time="2025-01-30T13:22:26.098193968Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:22:26.098559 containerd[1790]: time="2025-01-30T13:22:26.098549892Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:22:26.098589 containerd[1790]: time="2025-01-30T13:22:26.098564528Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:22:26.098589 containerd[1790]: time="2025-01-30T13:22:26.098571706Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:22:26.098589 containerd[1790]: time="2025-01-30T13:22:26.098578796Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:22:26.098589 containerd[1790]: time="2025-01-30T13:22:26.098584044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098651 containerd[1790]: time="2025-01-30T13:22:26.098591234Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:22:26.098651 containerd[1790]: time="2025-01-30T13:22:26.098596996Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:22:26.098651 containerd[1790]: time="2025-01-30T13:22:26.098602856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:22:26.098808 containerd[1790]: time="2025-01-30T13:22:26.098781882Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:22:26.098891 containerd[1790]: time="2025-01-30T13:22:26.098812040Z" level=info msg="Connect containerd service"
Jan 30 13:22:26.098891 containerd[1790]: time="2025-01-30T13:22:26.098831701Z" level=info msg="using legacy CRI server"
Jan 30 13:22:26.098891 containerd[1790]: time="2025-01-30T13:22:26.098836667Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:22:26.098940 containerd[1790]: time="2025-01-30T13:22:26.098913609Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:22:26.099232 containerd[1790]: time="2025-01-30T13:22:26.099220933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:22:26.099335 containerd[1790]: time="2025-01-30T13:22:26.099317498Z" level=info msg="Start subscribing containerd event"
Jan 30 13:22:26.099354 containerd[1790]: time="2025-01-30T13:22:26.099345376Z" level=info msg="Start recovering state"
Jan 30 13:22:26.099393 containerd[1790]: time="2025-01-30T13:22:26.099384493Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:22:26.099417 containerd[1790]: time="2025-01-30T13:22:26.099410921Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:22:26.099437 containerd[1790]: time="2025-01-30T13:22:26.099386262Z" level=info msg="Start event monitor"
Jan 30 13:22:26.099460 containerd[1790]: time="2025-01-30T13:22:26.099442887Z" level=info msg="Start snapshots syncer"
Jan 30 13:22:26.099460 containerd[1790]: time="2025-01-30T13:22:26.099450580Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:22:26.099460 containerd[1790]: time="2025-01-30T13:22:26.099454854Z" level=info msg="Start streaming server"
Jan 30 13:22:26.099524 containerd[1790]: time="2025-01-30T13:22:26.099494491Z" level=info msg="containerd successfully booted in 0.028343s"
Jan 30 13:22:26.105907 coreos-metadata[1848]: Jan 30 13:22:26.105 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 13:22:26.106887 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:22:26.121485 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Jan 30 13:22:26.134805 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:22:26.141430 extend-filesystems[1762]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Jan 30 13:22:26.141430 extend-filesystems[1762]: old_desc_blocks = 1, new_desc_blocks = 56
Jan 30 13:22:26.141430 extend-filesystems[1762]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Jan 30 13:22:26.183578 extend-filesystems[1752]: Resized filesystem in /dev/sdb9
Jan 30 13:22:26.144285 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Jan 30 13:22:26.172783 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:22:26.185694 tar[1787]: linux-amd64/LICENSE
Jan 30 13:22:26.185730 tar[1787]: linux-amd64/README.md
Jan 30 13:22:26.199156 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:22:26.199246 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:22:26.219546 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 13:22:26.571549 coreos-metadata[1746]: Jan 30 13:22:26.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Jan 30 13:22:26.746646 systemd-networkd[1702]: bond0: Gained IPv6LL
Jan 30 13:22:26.746957 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:27.194847 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:27.194968 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection.
Jan 30 13:22:27.196032 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:22:27.207257 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:22:27.232683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:22:27.243192 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:22:27.261233 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:22:27.849449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:22:27.878721 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:22:28.290702 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Jan 30 13:22:28.290886 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Jan 30 13:22:28.345710 kubelet[1884]: E0130 13:22:28.345686 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:22:28.346889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:22:28.346962 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:22:29.730119 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 13:22:29.746822 systemd[1]: Started sshd@0-147.75.90.199:22-139.178.89.65:47882.service - OpenSSH per-connection server daemon (139.178.89.65:47882).
Jan 30 13:22:29.804905 sshd[1905]: Accepted publickey for core from 139.178.89.65 port 47882 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:22:29.805599 sshd-session[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:22:29.811386 systemd-logind[1772]: New session 1 of user core.
Jan 30 13:22:29.812165 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:22:29.839975 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:22:29.857281 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:22:29.871236 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:22:29.882460 (systemd)[1909]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:22:29.958203 coreos-metadata[1746]: Jan 30 13:22:29.958 INFO Fetch successful
Jan 30 13:22:29.959821 systemd[1909]: Queued start job for default target default.target.
Jan 30 13:22:29.973609 coreos-metadata[1848]: Jan 30 13:22:29.963 INFO Fetch successful
Jan 30 13:22:29.973959 systemd[1909]: Created slice app.slice - User Application Slice.
Jan 30 13:22:29.973974 systemd[1909]: Reached target paths.target - Paths.
Jan 30 13:22:29.973983 systemd[1909]: Reached target timers.target - Timers.
Jan 30 13:22:29.974701 systemd[1909]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:22:29.980898 systemd[1909]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:22:29.980926 systemd[1909]: Reached target sockets.target - Sockets.
Jan 30 13:22:29.980935 systemd[1909]: Reached target basic.target - Basic System.
Jan 30 13:22:29.980956 systemd[1909]: Reached target default.target - Main User Target.
Jan 30 13:22:29.980972 systemd[1909]: Startup finished in 94ms.
Jan 30 13:22:29.981081 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:22:29.992197 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:22:29.996379 unknown[1848]: wrote ssh authorized keys file for user: core
Jan 30 13:22:30.008637 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:22:30.016119 update-ssh-keys[1919]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:22:30.019192 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:22:30.031479 systemd[1]: Finished sshkeys.service.
Jan 30 13:22:30.052688 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Jan 30 13:22:30.065367 systemd[1]: Started sshd@1-147.75.90.199:22-139.178.89.65:47890.service - OpenSSH per-connection server daemon (139.178.89.65:47890).
Jan 30 13:22:30.104249 sshd[1931]: Accepted publickey for core from 139.178.89.65 port 47890 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:22:30.104869 sshd-session[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:22:30.107283 systemd-logind[1772]: New session 2 of user core.
Jan 30 13:22:30.117629 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:22:30.190240 sshd[1933]: Connection closed by 139.178.89.65 port 47890
Jan 30 13:22:30.190967 sshd-session[1931]: pam_unix(sshd:session): session closed for user core
Jan 30 13:22:30.215945 systemd[1]: sshd@1-147.75.90.199:22-139.178.89.65:47890.service: Deactivated successfully.
Jan 30 13:22:30.219987 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 13:22:30.223410 systemd-logind[1772]: Session 2 logged out. Waiting for processes to exit.
Jan 30 13:22:30.243429 systemd[1]: Started sshd@2-147.75.90.199:22-139.178.89.65:47902.service - OpenSSH per-connection server daemon (139.178.89.65:47902).
Jan 30 13:22:30.257936 systemd-logind[1772]: Removed session 2.
Jan 30 13:22:30.293483 sshd[1938]: Accepted publickey for core from 139.178.89.65 port 47902 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:22:30.294069 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:22:30.296451 systemd-logind[1772]: New session 3 of user core.
Jan 30 13:22:30.306757 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:22:30.369679 sshd[1940]: Connection closed by 139.178.89.65 port 47902
Jan 30 13:22:30.369825 sshd-session[1938]: pam_unix(sshd:session): session closed for user core
Jan 30 13:22:30.371561 systemd[1]: sshd@2-147.75.90.199:22-139.178.89.65:47902.service: Deactivated successfully.
Jan 30 13:22:30.372361 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 13:22:30.372748 systemd-logind[1772]: Session 3 logged out. Waiting for processes to exit.
Jan 30 13:22:30.373317 systemd-logind[1772]: Removed session 3.
Jan 30 13:22:30.396592 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Jan 30 13:22:30.408736 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:22:30.418912 systemd[1]: Startup finished in 2.677s (kernel) + 20.055s (initrd) + 8.599s (userspace) = 31.333s.
Jan 30 13:22:30.436924 agetty[1861]: failed to open credentials directory
Jan 30 13:22:30.436940 agetty[1863]: failed to open credentials directory
Jan 30 13:22:30.441398 login[1863]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 13:22:30.444168 systemd-logind[1772]: New session 4 of user core.
Jan 30 13:22:30.444724 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 13:22:30.446818 login[1861]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 13:22:30.448970 systemd-logind[1772]: New session 5 of user core.
Jan 30 13:22:30.449627 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:22:38.598933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:22:38.619792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:22:38.815923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:22:38.819794 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:22:38.855138 kubelet[1979]: E0130 13:22:38.855062 1979 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:22:38.857220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:22:38.857306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:22:40.385851 systemd[1]: Started sshd@3-147.75.90.199:22-139.178.89.65:40688.service - OpenSSH per-connection server daemon (139.178.89.65:40688).
Jan 30 13:22:40.417945 sshd[1997]: Accepted publickey for core from 139.178.89.65 port 40688 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:22:40.418632 sshd-session[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:22:40.421120 systemd-logind[1772]: New session 6 of user core.
Jan 30 13:22:40.429787 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:22:40.479007 sshd[1999]: Connection closed by 139.178.89.65 port 40688
Jan 30 13:22:40.479267 sshd-session[1997]: pam_unix(sshd:session): session closed for user core
Jan 30 13:22:40.493897 systemd[1]: sshd@3-147.75.90.199:22-139.178.89.65:40688.service: Deactivated successfully.
Jan 30 13:22:40.495836 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:22:40.496890 systemd-logind[1772]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:22:40.497486 systemd[1]: Started sshd@4-147.75.90.199:22-139.178.89.65:40692.service - OpenSSH per-connection server daemon (139.178.89.65:40692).
Jan 30 13:22:40.497925 systemd-logind[1772]: Removed session 6. Jan 30 13:22:40.527693 sshd[2004]: Accepted publickey for core from 139.178.89.65 port 40692 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:22:40.528283 sshd-session[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:22:40.530994 systemd-logind[1772]: New session 7 of user core. Jan 30 13:22:40.547172 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:22:40.605013 sshd[2006]: Connection closed by 139.178.89.65 port 40692 Jan 30 13:22:40.605156 sshd-session[2004]: pam_unix(sshd:session): session closed for user core Jan 30 13:22:40.616096 systemd[1]: sshd@4-147.75.90.199:22-139.178.89.65:40692.service: Deactivated successfully. Jan 30 13:22:40.616840 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:22:40.617485 systemd-logind[1772]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:22:40.618195 systemd[1]: Started sshd@5-147.75.90.199:22-139.178.89.65:40694.service - OpenSSH per-connection server daemon (139.178.89.65:40694). Jan 30 13:22:40.618706 systemd-logind[1772]: Removed session 7. Jan 30 13:22:40.656781 sshd[2011]: Accepted publickey for core from 139.178.89.65 port 40694 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:22:40.657498 sshd-session[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:22:40.660349 systemd-logind[1772]: New session 8 of user core. Jan 30 13:22:40.670731 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:22:40.723484 sshd[2013]: Connection closed by 139.178.89.65 port 40694 Jan 30 13:22:40.723651 sshd-session[2011]: pam_unix(sshd:session): session closed for user core Jan 30 13:22:40.739217 systemd[1]: sshd@5-147.75.90.199:22-139.178.89.65:40694.service: Deactivated successfully. Jan 30 13:22:40.740093 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 30 13:22:40.740932 systemd-logind[1772]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:22:40.741620 systemd[1]: Started sshd@6-147.75.90.199:22-139.178.89.65:40696.service - OpenSSH per-connection server daemon (139.178.89.65:40696). Jan 30 13:22:40.742280 systemd-logind[1772]: Removed session 8. Jan 30 13:22:40.789848 sshd[2018]: Accepted publickey for core from 139.178.89.65 port 40696 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:22:40.790768 sshd-session[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:22:40.794230 systemd-logind[1772]: New session 9 of user core. Jan 30 13:22:40.804699 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:22:40.869699 sudo[2021]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:22:40.869846 sudo[2021]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:22:40.882171 sudo[2021]: pam_unix(sudo:session): session closed for user root Jan 30 13:22:40.882973 sshd[2020]: Connection closed by 139.178.89.65 port 40696 Jan 30 13:22:40.883165 sshd-session[2018]: pam_unix(sshd:session): session closed for user core Jan 30 13:22:40.900595 systemd[1]: sshd@6-147.75.90.199:22-139.178.89.65:40696.service: Deactivated successfully. Jan 30 13:22:40.901626 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:22:40.902578 systemd-logind[1772]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:22:40.903444 systemd[1]: Started sshd@7-147.75.90.199:22-139.178.89.65:35922.service - OpenSSH per-connection server daemon (139.178.89.65:35922). Jan 30 13:22:40.904201 systemd-logind[1772]: Removed session 9. 
Jan 30 13:22:40.956880 sshd[2026]: Accepted publickey for core from 139.178.89.65 port 35922 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:22:40.957835 sshd-session[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:22:40.961558 systemd-logind[1772]: New session 10 of user core. Jan 30 13:22:40.969731 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:22:41.026171 sudo[2030]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:22:41.026376 sudo[2030]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:22:41.028292 sudo[2030]: pam_unix(sudo:session): session closed for user root Jan 30 13:22:41.030819 sudo[2029]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:22:41.030961 sudo[2029]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:22:41.046838 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:22:41.067432 augenrules[2052]: No rules Jan 30 13:22:41.068056 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:22:41.068231 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:22:41.069118 sudo[2029]: pam_unix(sudo:session): session closed for user root Jan 30 13:22:41.070244 sshd[2028]: Connection closed by 139.178.89.65 port 35922 Jan 30 13:22:41.070550 sshd-session[2026]: pam_unix(sshd:session): session closed for user core Jan 30 13:22:41.074166 systemd[1]: sshd@7-147.75.90.199:22-139.178.89.65:35922.service: Deactivated successfully. Jan 30 13:22:41.075711 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:22:41.076624 systemd-logind[1772]: Session 10 logged out. Waiting for processes to exit. 
Jan 30 13:22:41.078716 systemd[1]: Started sshd@8-147.75.90.199:22-139.178.89.65:35936.service - OpenSSH per-connection server daemon (139.178.89.65:35936). Jan 30 13:22:41.079889 systemd-logind[1772]: Removed session 10. Jan 30 13:22:41.112529 sshd[2060]: Accepted publickey for core from 139.178.89.65 port 35936 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:22:41.115674 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:22:41.127069 systemd-logind[1772]: New session 11 of user core. Jan 30 13:22:41.141919 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:22:41.201155 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:22:41.201307 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:22:41.485849 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:22:41.485931 (dockerd)[2091]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:22:41.782627 dockerd[2091]: time="2025-01-30T13:22:41.782535943Z" level=info msg="Starting up" Jan 30 13:22:41.846891 dockerd[2091]: time="2025-01-30T13:22:41.846827797Z" level=info msg="Loading containers: start." Jan 30 13:22:41.965530 kernel: Initializing XFRM netlink socket Jan 30 13:22:41.980527 systemd-timesyncd[1704]: Network configuration changed, trying to establish connection. Jan 30 13:22:42.025445 systemd-networkd[1702]: docker0: Link UP Jan 30 13:22:42.048408 dockerd[2091]: time="2025-01-30T13:22:42.048341750Z" level=info msg="Loading containers: done." 
Jan 30 13:22:42.057051 dockerd[2091]: time="2025-01-30T13:22:42.057004197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:22:42.057051 dockerd[2091]: time="2025-01-30T13:22:42.057051096Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:22:42.057146 dockerd[2091]: time="2025-01-30T13:22:42.057098200Z" level=info msg="Daemon has completed initialization" Jan 30 13:22:42.070398 dockerd[2091]: time="2025-01-30T13:22:42.070341028Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:22:42.070403 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:22:42.313489 systemd-timesyncd[1704]: Contacted time server [2603:c020:0:8369::f00d:feed]:123 (2.flatcar.pool.ntp.org). Jan 30 13:22:42.313545 systemd-timesyncd[1704]: Initial clock synchronization to Thu 2025-01-30 13:22:42.405134 UTC. Jan 30 13:22:42.950531 containerd[1790]: time="2025-01-30T13:22:42.950463577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:22:43.449252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710889210.mount: Deactivated successfully. 
Jan 30 13:22:44.282746 containerd[1790]: time="2025-01-30T13:22:44.282688569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:44.282955 containerd[1790]: time="2025-01-30T13:22:44.282877063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:22:44.283299 containerd[1790]: time="2025-01-30T13:22:44.283259099Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:44.285229 containerd[1790]: time="2025-01-30T13:22:44.285189030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:44.285731 containerd[1790]: time="2025-01-30T13:22:44.285682608Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.335173828s" Jan 30 13:22:44.285731 containerd[1790]: time="2025-01-30T13:22:44.285700288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:22:44.296332 containerd[1790]: time="2025-01-30T13:22:44.296311736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:22:45.410680 containerd[1790]: time="2025-01-30T13:22:45.410626036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:45.410897 containerd[1790]: time="2025-01-30T13:22:45.410857583Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:22:45.411244 containerd[1790]: time="2025-01-30T13:22:45.411200861Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:45.428410 containerd[1790]: time="2025-01-30T13:22:45.428360526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:45.428986 containerd[1790]: time="2025-01-30T13:22:45.428941702Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.132609528s" Jan 30 13:22:45.428986 containerd[1790]: time="2025-01-30T13:22:45.428960255Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:22:45.440436 containerd[1790]: time="2025-01-30T13:22:45.440389267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:22:46.251688 containerd[1790]: time="2025-01-30T13:22:46.251633656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:46.251889 containerd[1790]: time="2025-01-30T13:22:46.251835776Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:22:46.252314 containerd[1790]: time="2025-01-30T13:22:46.252271239Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:46.253901 containerd[1790]: time="2025-01-30T13:22:46.253860107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:46.254949 containerd[1790]: time="2025-01-30T13:22:46.254907727Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 814.499513ms" Jan 30 13:22:46.254949 containerd[1790]: time="2025-01-30T13:22:46.254923048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:22:46.266383 containerd[1790]: time="2025-01-30T13:22:46.266363482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:22:46.998206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759738893.mount: Deactivated successfully. 
Jan 30 13:22:47.175622 containerd[1790]: time="2025-01-30T13:22:47.175596743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:47.175837 containerd[1790]: time="2025-01-30T13:22:47.175822626Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:22:47.176119 containerd[1790]: time="2025-01-30T13:22:47.176107242Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:47.177048 containerd[1790]: time="2025-01-30T13:22:47.177036894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:47.177460 containerd[1790]: time="2025-01-30T13:22:47.177447227Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 911.061399ms" Jan 30 13:22:47.177489 containerd[1790]: time="2025-01-30T13:22:47.177463221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:22:47.188617 containerd[1790]: time="2025-01-30T13:22:47.188565160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:22:47.692250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22725595.mount: Deactivated successfully. 
Jan 30 13:22:48.188459 containerd[1790]: time="2025-01-30T13:22:48.188404444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:48.188673 containerd[1790]: time="2025-01-30T13:22:48.188543222Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:22:48.189019 containerd[1790]: time="2025-01-30T13:22:48.188977426Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:48.190799 containerd[1790]: time="2025-01-30T13:22:48.190762014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:48.191336 containerd[1790]: time="2025-01-30T13:22:48.191294774Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.002711397s" Jan 30 13:22:48.191336 containerd[1790]: time="2025-01-30T13:22:48.191310132Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:22:48.202517 containerd[1790]: time="2025-01-30T13:22:48.202468685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:22:48.674068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505094380.mount: Deactivated successfully. 
Jan 30 13:22:48.694005 containerd[1790]: time="2025-01-30T13:22:48.693919848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:48.694185 containerd[1790]: time="2025-01-30T13:22:48.694163623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:22:48.694656 containerd[1790]: time="2025-01-30T13:22:48.694643210Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:48.695812 containerd[1790]: time="2025-01-30T13:22:48.695797369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:48.696368 containerd[1790]: time="2025-01-30T13:22:48.696356810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 493.838582ms" Jan 30 13:22:48.696401 containerd[1790]: time="2025-01-30T13:22:48.696370900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:22:48.707580 containerd[1790]: time="2025-01-30T13:22:48.707537106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:22:49.075842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:22:49.089794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:22:49.297021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:22:49.299480 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:22:49.321357 kubelet[2515]: E0130 13:22:49.321287 2515 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:22:49.322819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:22:49.322901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:22:49.377892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384245186.mount: Deactivated successfully. Jan 30 13:22:50.471194 containerd[1790]: time="2025-01-30T13:22:50.471164069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:50.471411 containerd[1790]: time="2025-01-30T13:22:50.471347749Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:22:50.471931 containerd[1790]: time="2025-01-30T13:22:50.471919797Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:50.473582 containerd[1790]: time="2025-01-30T13:22:50.473543576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:22:50.474747 containerd[1790]: time="2025-01-30T13:22:50.474707270Z" level=info msg="Pulled 
image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.767147381s" Jan 30 13:22:50.474747 containerd[1790]: time="2025-01-30T13:22:50.474721896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:22:52.569179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:22:52.582887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:22:52.592147 systemd[1]: Reloading requested from client PID 2730 ('systemctl') (unit session-11.scope)... Jan 30 13:22:52.592154 systemd[1]: Reloading... Jan 30 13:22:52.630518 zram_generator::config[2769]: No configuration found. Jan 30 13:22:52.697141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:22:52.758685 systemd[1]: Reloading finished in 166 ms. Jan 30 13:22:52.795974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:22:52.797466 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:22:52.798309 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:22:52.798411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:22:52.799238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:22:53.036151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:22:53.042524 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:22:53.069518 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:22:53.069518 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:22:53.069518 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:22:53.070504 kubelet[2838]: I0130 13:22:53.070455 2838 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:22:53.261443 kubelet[2838]: I0130 13:22:53.261402 2838 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:22:53.261443 kubelet[2838]: I0130 13:22:53.261412 2838 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:22:53.261585 kubelet[2838]: I0130 13:22:53.261548 2838 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:22:53.273975 kubelet[2838]: I0130 13:22:53.273907 2838 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:22:53.274811 kubelet[2838]: E0130 13:22:53.274773 2838 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://147.75.90.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.90.199:6443: connect: connection refused Jan 30 13:22:53.288826 kubelet[2838]: I0130 13:22:53.288793 2838 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:22:53.290619 kubelet[2838]: I0130 13:22:53.290548 2838 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:22:53.290763 kubelet[2838]: I0130 13:22:53.290591 2838 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-9d6a1ac7ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemor
y":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:22:53.290763 kubelet[2838]: I0130 13:22:53.290738 2838 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:22:53.290763 kubelet[2838]: I0130 13:22:53.290760 2838 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:22:53.290860 kubelet[2838]: I0130 13:22:53.290818 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:22:53.291541 kubelet[2838]: I0130 13:22:53.291510 2838 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:22:53.291623 kubelet[2838]: I0130 13:22:53.291544 2838 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:22:53.291623 kubelet[2838]: I0130 13:22:53.291574 2838 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:22:53.291623 kubelet[2838]: I0130 13:22:53.291595 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:22:53.294490 kubelet[2838]: W0130 13:22:53.294453 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-9d6a1ac7ae&limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused Jan 30 13:22:53.294490 kubelet[2838]: W0130 13:22:53.294466 2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused Jan 30 13:22:53.294563 kubelet[2838]: E0130 13:22:53.294510 2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused Jan 30 
13:22:53.294563 kubelet[2838]: E0130 13:22:53.294515    2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.90.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-9d6a1ac7ae&limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:53.295249 kubelet[2838]: I0130 13:22:53.295210    2838 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:22:53.296365 kubelet[2838]: I0130 13:22:53.296317    2838 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:22:53.296418 kubelet[2838]: W0130 13:22:53.296397    2838 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:22:53.296898 kubelet[2838]: I0130 13:22:53.296842    2838 server.go:1264] "Started kubelet"
Jan 30 13:22:53.297016 kubelet[2838]: I0130 13:22:53.296944    2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:22:53.297016 kubelet[2838]: I0130 13:22:53.296971    2838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:22:53.297154 kubelet[2838]: I0130 13:22:53.297110    2838 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:22:53.297800 kubelet[2838]: I0130 13:22:53.297791    2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:22:53.297959 kubelet[2838]: I0130 13:22:53.297948    2838 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:22:53.301113 kubelet[2838]: I0130 13:22:53.301073    2838 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:22:53.301152 kubelet[2838]: I0130 13:22:53.301113    2838 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:22:53.301152 kubelet[2838]: E0130 13:22:53.301124    2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-9d6a1ac7ae?timeout=10s\": dial tcp 147.75.90.199:6443: connect: connection refused" interval="200ms"
Jan 30 13:22:53.301206 kubelet[2838]: W0130 13:22:53.301175    2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:53.301266 kubelet[2838]: E0130 13:22:53.301209    2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:53.301361 kubelet[2838]: I0130 13:22:53.301352    2838 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:22:53.301408 kubelet[2838]: I0130 13:22:53.301399    2838 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:22:53.301443 kubelet[2838]: I0130 13:22:53.301409    2838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:22:53.301885 kubelet[2838]: I0130 13:22:53.301874    2838 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:22:53.302673 kubelet[2838]: E0130 13:22:53.302633    2838 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:22:53.305157 kubelet[2838]: E0130 13:22:53.305071    2838 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.199:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-9d6a1ac7ae.181f7b236e88c6bd  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-9d6a1ac7ae,UID:ci-4186.1.0-a-9d6a1ac7ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-9d6a1ac7ae,},FirstTimestamp:2025-01-30 13:22:53.296830141 +0000 UTC m=+0.252286544,LastTimestamp:2025-01-30 13:22:53.296830141 +0000 UTC m=+0.252286544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-9d6a1ac7ae,}"
Jan 30 13:22:53.308911 kubelet[2838]: I0130 13:22:53.308873    2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:22:53.309476 kubelet[2838]: I0130 13:22:53.309467    2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:22:53.309512 kubelet[2838]: I0130 13:22:53.309486    2838 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:22:53.309512 kubelet[2838]: I0130 13:22:53.309498    2838 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:22:53.309545 kubelet[2838]: E0130 13:22:53.309519    2838 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:22:53.309845 kubelet[2838]: W0130 13:22:53.309788    2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:53.309845 kubelet[2838]: E0130 13:22:53.309842    2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:53.311955 kubelet[2838]: I0130 13:22:53.311928    2838 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:22:53.311955 kubelet[2838]: I0130 13:22:53.311935    2838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:22:53.311955 kubelet[2838]: I0130 13:22:53.311944    2838 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:22:53.313050 kubelet[2838]: I0130 13:22:53.313015    2838 policy_none.go:49] "None policy: Start"
Jan 30 13:22:53.313413 kubelet[2838]: I0130 13:22:53.313380    2838 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:22:53.313413 kubelet[2838]: I0130 13:22:53.313392    2838 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:22:53.317471 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:22:53.335343 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:22:53.337394 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:22:53.345038 kubelet[2838]: I0130 13:22:53.345021    2838 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:22:53.345165 kubelet[2838]: I0130 13:22:53.345139    2838 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:22:53.345228 kubelet[2838]: I0130 13:22:53.345220    2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:22:53.345912 kubelet[2838]: E0130 13:22:53.345897    2838 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-9d6a1ac7ae\" not found"
Jan 30 13:22:53.401631 kubelet[2838]: I0130 13:22:53.401539    2838 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.402271 kubelet[2838]: E0130 13:22:53.402173    2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.199:6443/api/v1/nodes\": dial tcp 147.75.90.199:6443: connect: connection refused" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.410510 kubelet[2838]: I0130 13:22:53.410371    2838 topology_manager.go:215] "Topology Admit Handler" podUID="d1032a9694b531020e8316bff9532fd6" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.414163 kubelet[2838]: I0130 13:22:53.414106    2838 topology_manager.go:215] "Topology Admit Handler" podUID="08f0a5f3f64b6c745ef031e39491cffa" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.417695 kubelet[2838]: I0130 13:22:53.417602    2838 topology_manager.go:215] "Topology Admit Handler" podUID="769a404fbc22589da7423fb8bac06ed8" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.433208 systemd[1]: Created slice kubepods-burstable-pod08f0a5f3f64b6c745ef031e39491cffa.slice - libcontainer container kubepods-burstable-pod08f0a5f3f64b6c745ef031e39491cffa.slice.
Jan 30 13:22:53.472674 systemd[1]: Created slice kubepods-burstable-podd1032a9694b531020e8316bff9532fd6.slice - libcontainer container kubepods-burstable-podd1032a9694b531020e8316bff9532fd6.slice.
Jan 30 13:22:53.481926 systemd[1]: Created slice kubepods-burstable-pod769a404fbc22589da7423fb8bac06ed8.slice - libcontainer container kubepods-burstable-pod769a404fbc22589da7423fb8bac06ed8.slice.
Jan 30 13:22:53.502371 kubelet[2838]: E0130 13:22:53.502251    2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-9d6a1ac7ae?timeout=10s\": dial tcp 147.75.90.199:6443: connect: connection refused" interval="400ms"
Jan 30 13:22:53.601980 kubelet[2838]: I0130 13:22:53.601842    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1032a9694b531020e8316bff9532fd6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"d1032a9694b531020e8316bff9532fd6\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.601980 kubelet[2838]: I0130 13:22:53.601954    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602383 kubelet[2838]: I0130 13:22:53.602055    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602383 kubelet[2838]: I0130 13:22:53.602133    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602383 kubelet[2838]: I0130 13:22:53.602208    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1032a9694b531020e8316bff9532fd6-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"d1032a9694b531020e8316bff9532fd6\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602383 kubelet[2838]: I0130 13:22:53.602271    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1032a9694b531020e8316bff9532fd6-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"d1032a9694b531020e8316bff9532fd6\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602383 kubelet[2838]: I0130 13:22:53.602354    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602921 kubelet[2838]: I0130 13:22:53.602423    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.602921 kubelet[2838]: I0130 13:22:53.602521    2838 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/769a404fbc22589da7423fb8bac06ed8-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"769a404fbc22589da7423fb8bac06ed8\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.606544 kubelet[2838]: I0130 13:22:53.606459    2838 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.607274 kubelet[2838]: E0130 13:22:53.607167    2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.199:6443/api/v1/nodes\": dial tcp 147.75.90.199:6443: connect: connection refused" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:53.765819 containerd[1790]: time="2025-01-30T13:22:53.765695857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae,Uid:08f0a5f3f64b6c745ef031e39491cffa,Namespace:kube-system,Attempt:0,}"
Jan 30 13:22:53.778144 containerd[1790]: time="2025-01-30T13:22:53.778129712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae,Uid:d1032a9694b531020e8316bff9532fd6,Namespace:kube-system,Attempt:0,}"
Jan 30 13:22:53.787426 containerd[1790]: time="2025-01-30T13:22:53.787341781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae,Uid:769a404fbc22589da7423fb8bac06ed8,Namespace:kube-system,Attempt:0,}"
Jan 30 13:22:53.904351 kubelet[2838]: E0130 13:22:53.904118    2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-9d6a1ac7ae?timeout=10s\": dial tcp 147.75.90.199:6443: connect: connection refused" interval="800ms"
Jan 30 13:22:54.009806 kubelet[2838]: I0130 13:22:54.009774    2838 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:54.010155 kubelet[2838]: E0130 13:22:54.010124    2838 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.199:6443/api/v1/nodes\": dial tcp 147.75.90.199:6443: connect: connection refused" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:54.164459 kubelet[2838]: W0130 13:22:54.164396    2838 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:54.164459 kubelet[2838]: E0130 13:22:54.164424    2838 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.199:6443: connect: connection refused
Jan 30 13:22:54.235449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902289920.mount: Deactivated successfully.
Jan 30 13:22:54.236975 containerd[1790]: time="2025-01-30T13:22:54.236928340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:22:54.237212 containerd[1790]: time="2025-01-30T13:22:54.237162334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:22:54.237862 containerd[1790]: time="2025-01-30T13:22:54.237821036Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:22:54.238900 containerd[1790]: time="2025-01-30T13:22:54.238859170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:22:54.239125 containerd[1790]: time="2025-01-30T13:22:54.239081161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:22:54.239370 containerd[1790]: time="2025-01-30T13:22:54.239334723Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:22:54.239787 containerd[1790]: time="2025-01-30T13:22:54.239744321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:22:54.240209 containerd[1790]: time="2025-01-30T13:22:54.240168909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:22:54.241801 containerd[1790]: time="2025-01-30T13:22:54.241759753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.843491ms"
Jan 30 13:22:54.242470 containerd[1790]: time="2025-01-30T13:22:54.242425281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.872697ms"
Jan 30 13:22:54.243466 containerd[1790]: time="2025-01-30T13:22:54.243424168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.258884ms"
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351552006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351578609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351575057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351585792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351552032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351578627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351585478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:22:54.351603 containerd[1790]: time="2025-01-30T13:22:54.351602296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:22:54.351820 containerd[1790]: time="2025-01-30T13:22:54.351613431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:22:54.351820 containerd[1790]: time="2025-01-30T13:22:54.351625535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:22:54.351820 containerd[1790]: time="2025-01-30T13:22:54.351624653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:22:54.351820 containerd[1790]: time="2025-01-30T13:22:54.351655874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:22:54.369813 systemd[1]: Started cri-containerd-1a27c0582b04ab12d52ba0d612cff82d0f3e0019c0cac7861e5754debd297aea.scope - libcontainer container 1a27c0582b04ab12d52ba0d612cff82d0f3e0019c0cac7861e5754debd297aea.
Jan 30 13:22:54.370476 systemd[1]: Started cri-containerd-30bb097bf606b04185037634cf9d95ec2b4b21df51fc0dc852b4bf3fa1d276e1.scope - libcontainer container 30bb097bf606b04185037634cf9d95ec2b4b21df51fc0dc852b4bf3fa1d276e1.
Jan 30 13:22:54.371135 systemd[1]: Started cri-containerd-f2cc332cf56fe500b7a5b4585dfc7abd03b58de3b2347e2d38b91c1d9f922ae0.scope - libcontainer container f2cc332cf56fe500b7a5b4585dfc7abd03b58de3b2347e2d38b91c1d9f922ae0.
Jan 30 13:22:54.391696 containerd[1790]: time="2025-01-30T13:22:54.391670997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae,Uid:08f0a5f3f64b6c745ef031e39491cffa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a27c0582b04ab12d52ba0d612cff82d0f3e0019c0cac7861e5754debd297aea\""
Jan 30 13:22:54.391900 containerd[1790]: time="2025-01-30T13:22:54.391889897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae,Uid:d1032a9694b531020e8316bff9532fd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"30bb097bf606b04185037634cf9d95ec2b4b21df51fc0dc852b4bf3fa1d276e1\""
Jan 30 13:22:54.393461 containerd[1790]: time="2025-01-30T13:22:54.393443855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae,Uid:769a404fbc22589da7423fb8bac06ed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2cc332cf56fe500b7a5b4585dfc7abd03b58de3b2347e2d38b91c1d9f922ae0\""
Jan 30 13:22:54.394251 containerd[1790]: time="2025-01-30T13:22:54.394238909Z" level=info msg="CreateContainer within sandbox \"30bb097bf606b04185037634cf9d95ec2b4b21df51fc0dc852b4bf3fa1d276e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:22:54.394285 containerd[1790]: time="2025-01-30T13:22:54.394258091Z" level=info msg="CreateContainer within sandbox \"f2cc332cf56fe500b7a5b4585dfc7abd03b58de3b2347e2d38b91c1d9f922ae0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:22:54.394324 containerd[1790]: time="2025-01-30T13:22:54.394316056Z" level=info msg="CreateContainer within sandbox \"1a27c0582b04ab12d52ba0d612cff82d0f3e0019c0cac7861e5754debd297aea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:22:54.401510 containerd[1790]: time="2025-01-30T13:22:54.401463289Z" level=info msg="CreateContainer within sandbox \"f2cc332cf56fe500b7a5b4585dfc7abd03b58de3b2347e2d38b91c1d9f922ae0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c5565dc01fb2d9596753e4e163c7166d14cd77119327afe8af9a7d4a116852a5\""
Jan 30 13:22:54.401805 containerd[1790]: time="2025-01-30T13:22:54.401789246Z" level=info msg="StartContainer for \"c5565dc01fb2d9596753e4e163c7166d14cd77119327afe8af9a7d4a116852a5\""
Jan 30 13:22:54.402380 containerd[1790]: time="2025-01-30T13:22:54.402368619Z" level=info msg="CreateContainer within sandbox \"1a27c0582b04ab12d52ba0d612cff82d0f3e0019c0cac7861e5754debd297aea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6801952fa0fdd4f77a3380dc42ed547ba1466c275070ef59f689b1c099fcfa97\""
Jan 30 13:22:54.402531 containerd[1790]: time="2025-01-30T13:22:54.402520885Z" level=info msg="StartContainer for \"6801952fa0fdd4f77a3380dc42ed547ba1466c275070ef59f689b1c099fcfa97\""
Jan 30 13:22:54.402821 containerd[1790]: time="2025-01-30T13:22:54.402808403Z" level=info msg="CreateContainer within sandbox \"30bb097bf606b04185037634cf9d95ec2b4b21df51fc0dc852b4bf3fa1d276e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ef0b3df3e948eab91672f5db7e15883805e602beed96fa21d7873e4a87b0073c\""
Jan 30 13:22:54.402953 containerd[1790]: time="2025-01-30T13:22:54.402942678Z" level=info msg="StartContainer for \"ef0b3df3e948eab91672f5db7e15883805e602beed96fa21d7873e4a87b0073c\""
Jan 30 13:22:54.421814 systemd[1]: Started cri-containerd-6801952fa0fdd4f77a3380dc42ed547ba1466c275070ef59f689b1c099fcfa97.scope - libcontainer container 6801952fa0fdd4f77a3380dc42ed547ba1466c275070ef59f689b1c099fcfa97.
Jan 30 13:22:54.422384 systemd[1]: Started cri-containerd-c5565dc01fb2d9596753e4e163c7166d14cd77119327afe8af9a7d4a116852a5.scope - libcontainer container c5565dc01fb2d9596753e4e163c7166d14cd77119327afe8af9a7d4a116852a5.
Jan 30 13:22:54.422954 systemd[1]: Started cri-containerd-ef0b3df3e948eab91672f5db7e15883805e602beed96fa21d7873e4a87b0073c.scope - libcontainer container ef0b3df3e948eab91672f5db7e15883805e602beed96fa21d7873e4a87b0073c.
Jan 30 13:22:54.445773 containerd[1790]: time="2025-01-30T13:22:54.445751824Z" level=info msg="StartContainer for \"ef0b3df3e948eab91672f5db7e15883805e602beed96fa21d7873e4a87b0073c\" returns successfully"
Jan 30 13:22:54.453775 containerd[1790]: time="2025-01-30T13:22:54.453749623Z" level=info msg="StartContainer for \"6801952fa0fdd4f77a3380dc42ed547ba1466c275070ef59f689b1c099fcfa97\" returns successfully"
Jan 30 13:22:54.453864 containerd[1790]: time="2025-01-30T13:22:54.453750421Z" level=info msg="StartContainer for \"c5565dc01fb2d9596753e4e163c7166d14cd77119327afe8af9a7d4a116852a5\" returns successfully"
Jan 30 13:22:54.812422 kubelet[2838]: I0130 13:22:54.812353    2838 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:55.128344 kubelet[2838]: E0130 13:22:55.126301    2838 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-9d6a1ac7ae\" not found" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:55.229759 kubelet[2838]: I0130 13:22:55.229683    2838 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:55.295601 kubelet[2838]: I0130 13:22:55.295567    2838 apiserver.go:52] "Watching apiserver"
Jan 30 13:22:55.301247 kubelet[2838]: I0130 13:22:55.301224    2838 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:22:55.321153 kubelet[2838]: E0130 13:22:55.321113    2838 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:55.321153 kubelet[2838]: E0130 13:22:55.321120    2838 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:55.321345 kubelet[2838]: E0130 13:22:55.321123    2838 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae"
Jan 30 13:22:56.325405 kubelet[2838]: W0130 13:22:56.325338    2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:22:56.325405 kubelet[2838]: W0130 13:22:56.325361    2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:22:56.813217 kubelet[2838]: W0130 13:22:56.812991    2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:22:57.518619 systemd[1]: Reloading requested from client PID 3159 ('systemctl') (unit session-11.scope)...
Jan 30 13:22:57.518653 systemd[1]: Reloading...
Jan 30 13:22:57.581545 zram_generator::config[3198]: No configuration found.
Jan 30 13:22:57.649508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:22:57.719150 systemd[1]: Reloading finished in 199 ms.
Jan 30 13:22:57.743738 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:22:57.749783 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:22:57.749894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:22:57.769303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:22:58.030210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:22:58.032649 (kubelet)[3262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:22:58.056329 kubelet[3262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:22:58.056329 kubelet[3262]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:22:58.056329 kubelet[3262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:22:58.056721 kubelet[3262]: I0130 13:22:58.056381    3262 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:22:58.059032 kubelet[3262]: I0130 13:22:58.058996    3262 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:22:58.059032 kubelet[3262]: I0130 13:22:58.059008    3262 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:22:58.059168 kubelet[3262]: I0130 13:22:58.059162    3262 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:22:58.059900 kubelet[3262]: I0130 13:22:58.059888    3262 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:22:58.060702 kubelet[3262]: I0130 13:22:58.060663    3262 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:22:58.069840 kubelet[3262]: I0130 13:22:58.069795    3262 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:22:58.069919 kubelet[3262]: I0130 13:22:58.069902    3262 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:22:58.070063 kubelet[3262]: I0130 13:22:58.069921    3262 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-9d6a1ac7ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:22:58.070063 kubelet[3262]: I0130 13:22:58.070042    3262 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:22:58.070063 kubelet[3262]: I0130 13:22:58.070048    3262 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:22:58.070158 kubelet[3262]: I0130 13:22:58.070070    3262 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:22:58.070158 kubelet[3262]: I0130 13:22:58.070117    3262 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:22:58.070158 kubelet[3262]: I0130 13:22:58.070124    3262 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:22:58.070158 kubelet[3262]: I0130 13:22:58.070135    3262 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:22:58.070158 kubelet[3262]: I0130 13:22:58.070144    3262 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:22:58.070526 kubelet[3262]: I0130 13:22:58.070513    3262 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:22:58.070632 kubelet[3262]: I0130 13:22:58.070624    3262 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:22:58.070832 kubelet[3262]: I0130 13:22:58.070823    3262 server.go:1264] "Started kubelet"
Jan 30 13:22:58.070895 kubelet[3262]: I0130 13:22:58.070864    3262 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:22:58.070925 kubelet[3262]: I0130 13:22:58.070891    3262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:22:58.071048 kubelet[3262]: I0130 13:22:58.071040    3262 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:22:58.071397 kubelet[3262]: I0130 13:22:58.071389    3262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:22:58.071483 kubelet[3262]: I0130 13:22:58.071470    3262 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:22:58.071537 kubelet[3262]: I0130 13:22:58.071492    3262 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:22:58.071609 kubelet[3262]: I0130 13:22:58.071601    3262 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:22:58.071747 kubelet[3262]: I0130 13:22:58.071739    3262 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:22:58.071797 kubelet[3262]: E0130 13:22:58.071784    3262 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:22:58.071823 kubelet[3262]: I0130 13:22:58.071817    3262 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:22:58.071887 kubelet[3262]: I0130 13:22:58.071876    3262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:22:58.073361 kubelet[3262]: I0130 13:22:58.072695    3262 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:22:58.077261 kubelet[3262]: I0130 13:22:58.077236    3262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:22:58.077765 kubelet[3262]: I0130 13:22:58.077756    3262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:22:58.077810 kubelet[3262]: I0130 13:22:58.077772    3262 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:22:58.077810 kubelet[3262]: I0130 13:22:58.077782    3262 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:22:58.077884 kubelet[3262]: E0130 13:22:58.077806    3262 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:22:58.087756 kubelet[3262]: I0130 13:22:58.087739    3262 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:22:58.087756 kubelet[3262]: I0130 13:22:58.087753    3262 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:22:58.087849 kubelet[3262]: I0130 13:22:58.087768    3262 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:22:58.087876 kubelet[3262]: I0130 13:22:58.087869    3262 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:22:58.087898 kubelet[3262]: I0130 13:22:58.087877    3262 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:22:58.087898 kubelet[3262]: I0130 13:22:58.087888    3262 policy_none.go:49] "None policy: Start"
Jan 30 13:22:58.088176 kubelet[3262]: I0130 13:22:58.088166    3262 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:22:58.088176 kubelet[3262]: I0130 13:22:58.088178    3262 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:22:58.088273 kubelet[3262]: I0130 13:22:58.088264    3262 state_mem.go:75] "Updated machine memory state"
Jan 30 13:22:58.090205 kubelet[3262]: I0130 13:22:58.090195    3262 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:22:58.090308 kubelet[3262]: I0130 13:22:58.090289    3262 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:22:58.090368 kubelet[3262]: I0130 13:22:58.090360    3262 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:22:58.179025 kubelet[3262]: I0130 13:22:58.178861 3262 topology_manager.go:215] "Topology Admit Handler" podUID="d1032a9694b531020e8316bff9532fd6" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.179270 kubelet[3262]: I0130 13:22:58.179118 3262 topology_manager.go:215] "Topology Admit Handler" podUID="08f0a5f3f64b6c745ef031e39491cffa" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.179423 kubelet[3262]: I0130 13:22:58.179345 3262 topology_manager.go:215] "Topology Admit Handler" podUID="769a404fbc22589da7423fb8bac06ed8" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.179878 kubelet[3262]: I0130 13:22:58.179811 3262 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.188414 kubelet[3262]: W0130 13:22:58.188345 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:22:58.188732 kubelet[3262]: W0130 13:22:58.188440 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:22:58.188732 kubelet[3262]: W0130 13:22:58.188460 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:22:58.188732 kubelet[3262]: E0130 13:22:58.188544 3262 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.188732 kubelet[3262]: E0130 13:22:58.188598 3262 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.188732 kubelet[3262]: E0130 13:22:58.188652 3262 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.189377 kubelet[3262]: I0130 13:22:58.189334 3262 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.189613 kubelet[3262]: I0130 13:22:58.189499 3262 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.272962 kubelet[3262]: I0130 13:22:58.272854 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/769a404fbc22589da7423fb8bac06ed8-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"769a404fbc22589da7423fb8bac06ed8\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.272962 kubelet[3262]: I0130 13:22:58.272957 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1032a9694b531020e8316bff9532fd6-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"d1032a9694b531020e8316bff9532fd6\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273417 kubelet[3262]: I0130 13:22:58.273018 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1032a9694b531020e8316bff9532fd6-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"d1032a9694b531020e8316bff9532fd6\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273417 kubelet[3262]: I0130 13:22:58.273068 3262 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273417 kubelet[3262]: I0130 13:22:58.273119 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273417 kubelet[3262]: I0130 13:22:58.273166 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273417 kubelet[3262]: I0130 13:22:58.273277 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273998 kubelet[3262]: I0130 13:22:58.273425 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1032a9694b531020e8316bff9532fd6-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"d1032a9694b531020e8316bff9532fd6\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:58.273998 kubelet[3262]: I0130 13:22:58.273547 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08f0a5f3f64b6c745ef031e39491cffa-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" (UID: \"08f0a5f3f64b6c745ef031e39491cffa\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:59.070709 kubelet[3262]: I0130 13:22:59.070655 3262 apiserver.go:52] "Watching apiserver" Jan 30 13:22:59.092011 kubelet[3262]: W0130 13:22:59.091721 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:22:59.092011 kubelet[3262]: E0130 13:22:59.091992 3262 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:59.092362 kubelet[3262]: W0130 13:22:59.092327 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:22:59.092391 kubelet[3262]: E0130 13:22:59.092361 3262 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:22:59.124337 kubelet[3262]: I0130 13:22:59.124299 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-9d6a1ac7ae" podStartSLOduration=3.124285903 podStartE2EDuration="3.124285903s" podCreationTimestamp="2025-01-30 13:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:22:59.119329781 +0000 UTC m=+1.084750314" watchObservedRunningTime="2025-01-30 13:22:59.124285903 +0000 UTC m=+1.089706439" Jan 30 13:22:59.124463 kubelet[3262]: I0130 13:22:59.124374 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-9d6a1ac7ae" podStartSLOduration=3.124368971 podStartE2EDuration="3.124368971s" podCreationTimestamp="2025-01-30 13:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:22:59.12435913 +0000 UTC m=+1.089779666" watchObservedRunningTime="2025-01-30 13:22:59.124368971 +0000 UTC m=+1.089789505" Jan 30 13:22:59.129680 kubelet[3262]: I0130 13:22:59.129647 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-9d6a1ac7ae" podStartSLOduration=3.129637044 podStartE2EDuration="3.129637044s" podCreationTimestamp="2025-01-30 13:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:22:59.129360637 +0000 UTC m=+1.094781182" watchObservedRunningTime="2025-01-30 13:22:59.129637044 +0000 UTC m=+1.095057577" Jan 30 13:22:59.172229 kubelet[3262]: I0130 13:22:59.172191 3262 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:23:02.340796 sudo[2063]: pam_unix(sudo:session): session closed for user root Jan 30 13:23:02.341655 sshd[2062]: Connection closed by 139.178.89.65 port 35936 Jan 30 13:23:02.341860 sshd-session[2060]: pam_unix(sshd:session): session closed for user core Jan 30 13:23:02.343599 systemd[1]: sshd@8-147.75.90.199:22-139.178.89.65:35936.service: Deactivated successfully. Jan 30 13:23:02.344668 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 30 13:23:02.344776 systemd[1]: session-11.scope: Consumed 3.543s CPU time, 200.6M memory peak, 0B memory swap peak. Jan 30 13:23:02.345476 systemd-logind[1772]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:23:02.346307 systemd-logind[1772]: Removed session 11. Jan 30 13:23:10.531317 kubelet[3262]: I0130 13:23:10.531296 3262 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:23:10.531655 kubelet[3262]: I0130 13:23:10.531639 3262 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:23:10.531683 containerd[1790]: time="2025-01-30T13:23:10.531532227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:23:10.857365 kubelet[3262]: I0130 13:23:10.857279 3262 topology_manager.go:215] "Topology Admit Handler" podUID="97a6fac3-ae40-4ec3-9e80-a756b2ce287e" podNamespace="kube-system" podName="kube-proxy-lmpb6" Jan 30 13:23:10.860528 systemd[1]: Created slice kubepods-besteffort-pod97a6fac3_ae40_4ec3_9e80_a756b2ce287e.slice - libcontainer container kubepods-besteffort-pod97a6fac3_ae40_4ec3_9e80_a756b2ce287e.slice. 
Jan 30 13:23:10.870768 kubelet[3262]: I0130 13:23:10.870750 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97a6fac3-ae40-4ec3-9e80-a756b2ce287e-kube-proxy\") pod \"kube-proxy-lmpb6\" (UID: \"97a6fac3-ae40-4ec3-9e80-a756b2ce287e\") " pod="kube-system/kube-proxy-lmpb6" Jan 30 13:23:10.870836 kubelet[3262]: I0130 13:23:10.870771 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97a6fac3-ae40-4ec3-9e80-a756b2ce287e-xtables-lock\") pod \"kube-proxy-lmpb6\" (UID: \"97a6fac3-ae40-4ec3-9e80-a756b2ce287e\") " pod="kube-system/kube-proxy-lmpb6" Jan 30 13:23:10.870836 kubelet[3262]: I0130 13:23:10.870782 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97a6fac3-ae40-4ec3-9e80-a756b2ce287e-lib-modules\") pod \"kube-proxy-lmpb6\" (UID: \"97a6fac3-ae40-4ec3-9e80-a756b2ce287e\") " pod="kube-system/kube-proxy-lmpb6" Jan 30 13:23:10.870836 kubelet[3262]: I0130 13:23:10.870794 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6tp\" (UniqueName: \"kubernetes.io/projected/97a6fac3-ae40-4ec3-9e80-a756b2ce287e-kube-api-access-9m6tp\") pod \"kube-proxy-lmpb6\" (UID: \"97a6fac3-ae40-4ec3-9e80-a756b2ce287e\") " pod="kube-system/kube-proxy-lmpb6" Jan 30 13:23:10.984197 kubelet[3262]: E0130 13:23:10.984089 3262 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 13:23:10.984197 kubelet[3262]: E0130 13:23:10.984154 3262 projected.go:200] Error preparing data for projected volume kube-api-access-9m6tp for pod kube-system/kube-proxy-lmpb6: configmap "kube-root-ca.crt" not found Jan 30 13:23:10.984608 kubelet[3262]: E0130 13:23:10.984284 3262 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97a6fac3-ae40-4ec3-9e80-a756b2ce287e-kube-api-access-9m6tp podName:97a6fac3-ae40-4ec3-9e80-a756b2ce287e nodeName:}" failed. No retries permitted until 2025-01-30 13:23:11.48423718 +0000 UTC m=+13.449657779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9m6tp" (UniqueName: "kubernetes.io/projected/97a6fac3-ae40-4ec3-9e80-a756b2ce287e-kube-api-access-9m6tp") pod "kube-proxy-lmpb6" (UID: "97a6fac3-ae40-4ec3-9e80-a756b2ce287e") : configmap "kube-root-ca.crt" not found Jan 30 13:23:11.094459 update_engine[1777]: I20250130 13:23:11.094323 1777 update_attempter.cc:509] Updating boot flags... Jan 30 13:23:11.136492 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (3437) Jan 30 13:23:11.163487 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (3440) Jan 30 13:23:11.737804 kubelet[3262]: I0130 13:23:11.737711 3262 topology_manager.go:215] "Topology Admit Handler" podUID="7529bfd4-5213-470f-8c3a-eb4a91fdd7ef" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-czz48" Jan 30 13:23:11.750241 systemd[1]: Created slice kubepods-besteffort-pod7529bfd4_5213_470f_8c3a_eb4a91fdd7ef.slice - libcontainer container kubepods-besteffort-pod7529bfd4_5213_470f_8c3a_eb4a91fdd7ef.slice. 
Jan 30 13:23:11.773650 containerd[1790]: time="2025-01-30T13:23:11.773558015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmpb6,Uid:97a6fac3-ae40-4ec3-9e80-a756b2ce287e,Namespace:kube-system,Attempt:0,}" Jan 30 13:23:11.776728 kubelet[3262]: I0130 13:23:11.776686 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7529bfd4-5213-470f-8c3a-eb4a91fdd7ef-var-lib-calico\") pod \"tigera-operator-7bc55997bb-czz48\" (UID: \"7529bfd4-5213-470f-8c3a-eb4a91fdd7ef\") " pod="tigera-operator/tigera-operator-7bc55997bb-czz48" Jan 30 13:23:11.776728 kubelet[3262]: I0130 13:23:11.776705 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6xh\" (UniqueName: \"kubernetes.io/projected/7529bfd4-5213-470f-8c3a-eb4a91fdd7ef-kube-api-access-mt6xh\") pod \"tigera-operator-7bc55997bb-czz48\" (UID: \"7529bfd4-5213-470f-8c3a-eb4a91fdd7ef\") " pod="tigera-operator/tigera-operator-7bc55997bb-czz48" Jan 30 13:23:11.784128 containerd[1790]: time="2025-01-30T13:23:11.783856927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:11.784128 containerd[1790]: time="2025-01-30T13:23:11.784085166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:11.784128 containerd[1790]: time="2025-01-30T13:23:11.784093165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:11.784230 containerd[1790]: time="2025-01-30T13:23:11.784135051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:11.811776 systemd[1]: Started cri-containerd-caa11425129991112954e5cde53a0a28d6ab702a7e25e86c843c528c4c47dce9.scope - libcontainer container caa11425129991112954e5cde53a0a28d6ab702a7e25e86c843c528c4c47dce9. Jan 30 13:23:11.824451 containerd[1790]: time="2025-01-30T13:23:11.824423695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmpb6,Uid:97a6fac3-ae40-4ec3-9e80-a756b2ce287e,Namespace:kube-system,Attempt:0,} returns sandbox id \"caa11425129991112954e5cde53a0a28d6ab702a7e25e86c843c528c4c47dce9\"" Jan 30 13:23:11.826096 containerd[1790]: time="2025-01-30T13:23:11.826078860Z" level=info msg="CreateContainer within sandbox \"caa11425129991112954e5cde53a0a28d6ab702a7e25e86c843c528c4c47dce9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:23:11.831951 containerd[1790]: time="2025-01-30T13:23:11.831908372Z" level=info msg="CreateContainer within sandbox \"caa11425129991112954e5cde53a0a28d6ab702a7e25e86c843c528c4c47dce9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7fde73b02e98c9c24e5124dda4e1ab8eaf0180c97538646c2298c98060ea35cd\"" Jan 30 13:23:11.832172 containerd[1790]: time="2025-01-30T13:23:11.832136957Z" level=info msg="StartContainer for \"7fde73b02e98c9c24e5124dda4e1ab8eaf0180c97538646c2298c98060ea35cd\"" Jan 30 13:23:11.864679 systemd[1]: Started cri-containerd-7fde73b02e98c9c24e5124dda4e1ab8eaf0180c97538646c2298c98060ea35cd.scope - libcontainer container 7fde73b02e98c9c24e5124dda4e1ab8eaf0180c97538646c2298c98060ea35cd. 
Jan 30 13:23:11.886012 containerd[1790]: time="2025-01-30T13:23:11.885977814Z" level=info msg="StartContainer for \"7fde73b02e98c9c24e5124dda4e1ab8eaf0180c97538646c2298c98060ea35cd\" returns successfully" Jan 30 13:23:12.055828 containerd[1790]: time="2025-01-30T13:23:12.055635842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-czz48,Uid:7529bfd4-5213-470f-8c3a-eb4a91fdd7ef,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:23:12.066673 containerd[1790]: time="2025-01-30T13:23:12.066635406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:12.066673 containerd[1790]: time="2025-01-30T13:23:12.066664021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:12.066673 containerd[1790]: time="2025-01-30T13:23:12.066670740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:12.066880 containerd[1790]: time="2025-01-30T13:23:12.066733529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:12.090610 systemd[1]: Started cri-containerd-0980a716ee033395c6170e3072b995f3db3f1bac22d0fc893918debdb22f3989.scope - libcontainer container 0980a716ee033395c6170e3072b995f3db3f1bac22d0fc893918debdb22f3989. 
Jan 30 13:23:12.116900 containerd[1790]: time="2025-01-30T13:23:12.116870488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-czz48,Uid:7529bfd4-5213-470f-8c3a-eb4a91fdd7ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0980a716ee033395c6170e3072b995f3db3f1bac22d0fc893918debdb22f3989\"" Jan 30 13:23:12.117120 kubelet[3262]: I0130 13:23:12.117088 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lmpb6" podStartSLOduration=2.117073271 podStartE2EDuration="2.117073271s" podCreationTimestamp="2025-01-30 13:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:23:12.11704408 +0000 UTC m=+14.082464625" watchObservedRunningTime="2025-01-30 13:23:12.117073271 +0000 UTC m=+14.082493812" Jan 30 13:23:12.117886 containerd[1790]: time="2025-01-30T13:23:12.117867073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:23:14.674677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177623798.mount: Deactivated successfully. 
Jan 30 13:23:14.883314 containerd[1790]: time="2025-01-30T13:23:14.883261789Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:14.883526 containerd[1790]: time="2025-01-30T13:23:14.883459187Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:23:14.883785 containerd[1790]: time="2025-01-30T13:23:14.883745169Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:14.885220 containerd[1790]: time="2025-01-30T13:23:14.885179857Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:14.885520 containerd[1790]: time="2025-01-30T13:23:14.885488631Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.767596823s" Jan 30 13:23:14.885520 containerd[1790]: time="2025-01-30T13:23:14.885508743Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:23:14.886516 containerd[1790]: time="2025-01-30T13:23:14.886495025Z" level=info msg="CreateContainer within sandbox \"0980a716ee033395c6170e3072b995f3db3f1bac22d0fc893918debdb22f3989\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:23:14.890218 containerd[1790]: time="2025-01-30T13:23:14.890176047Z" level=info msg="CreateContainer within sandbox 
\"0980a716ee033395c6170e3072b995f3db3f1bac22d0fc893918debdb22f3989\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"546cf9ed64a3770c4c8a4fcb9ba73a022b1a5b01a4248f0c802269650074e2f3\"" Jan 30 13:23:14.890450 containerd[1790]: time="2025-01-30T13:23:14.890411124Z" level=info msg="StartContainer for \"546cf9ed64a3770c4c8a4fcb9ba73a022b1a5b01a4248f0c802269650074e2f3\"" Jan 30 13:23:14.914788 systemd[1]: Started cri-containerd-546cf9ed64a3770c4c8a4fcb9ba73a022b1a5b01a4248f0c802269650074e2f3.scope - libcontainer container 546cf9ed64a3770c4c8a4fcb9ba73a022b1a5b01a4248f0c802269650074e2f3. Jan 30 13:23:14.925753 containerd[1790]: time="2025-01-30T13:23:14.925701099Z" level=info msg="StartContainer for \"546cf9ed64a3770c4c8a4fcb9ba73a022b1a5b01a4248f0c802269650074e2f3\" returns successfully" Jan 30 13:23:15.126605 kubelet[3262]: I0130 13:23:15.126535 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-czz48" podStartSLOduration=1.358239189 podStartE2EDuration="4.126519706s" podCreationTimestamp="2025-01-30 13:23:11 +0000 UTC" firstStartedPulling="2025-01-30 13:23:12.117590695 +0000 UTC m=+14.083011229" lastFinishedPulling="2025-01-30 13:23:14.885871216 +0000 UTC m=+16.851291746" observedRunningTime="2025-01-30 13:23:15.126491755 +0000 UTC m=+17.091912295" watchObservedRunningTime="2025-01-30 13:23:15.126519706 +0000 UTC m=+17.091940246" Jan 30 13:23:17.793999 kubelet[3262]: I0130 13:23:17.793913 3262 topology_manager.go:215] "Topology Admit Handler" podUID="6e308052-d68e-468f-801d-19109b269097" podNamespace="calico-system" podName="calico-typha-794969564d-tkq69" Jan 30 13:23:17.812234 systemd[1]: Created slice kubepods-besteffort-pod6e308052_d68e_468f_801d_19109b269097.slice - libcontainer container kubepods-besteffort-pod6e308052_d68e_468f_801d_19109b269097.slice. 
Jan 30 13:23:17.818893 kubelet[3262]: I0130 13:23:17.818852 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e308052-d68e-468f-801d-19109b269097-tigera-ca-bundle\") pod \"calico-typha-794969564d-tkq69\" (UID: \"6e308052-d68e-468f-801d-19109b269097\") " pod="calico-system/calico-typha-794969564d-tkq69" Jan 30 13:23:17.819048 kubelet[3262]: I0130 13:23:17.818913 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvh8\" (UniqueName: \"kubernetes.io/projected/6e308052-d68e-468f-801d-19109b269097-kube-api-access-9vvh8\") pod \"calico-typha-794969564d-tkq69\" (UID: \"6e308052-d68e-468f-801d-19109b269097\") " pod="calico-system/calico-typha-794969564d-tkq69" Jan 30 13:23:17.819048 kubelet[3262]: I0130 13:23:17.818953 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e308052-d68e-468f-801d-19109b269097-typha-certs\") pod \"calico-typha-794969564d-tkq69\" (UID: \"6e308052-d68e-468f-801d-19109b269097\") " pod="calico-system/calico-typha-794969564d-tkq69" Jan 30 13:23:17.829129 kubelet[3262]: I0130 13:23:17.829095 3262 topology_manager.go:215] "Topology Admit Handler" podUID="3c36436a-e640-4821-84ec-c91fbf10d143" podNamespace="calico-system" podName="calico-node-j2qfm" Jan 30 13:23:17.834626 systemd[1]: Created slice kubepods-besteffort-pod3c36436a_e640_4821_84ec_c91fbf10d143.slice - libcontainer container kubepods-besteffort-pod3c36436a_e640_4821_84ec_c91fbf10d143.slice. 
Jan 30 13:23:17.919177 kubelet[3262]: I0130 13:23:17.919106 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-xtables-lock\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919177 kubelet[3262]: I0130 13:23:17.919145 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9hk\" (UniqueName: \"kubernetes.io/projected/3c36436a-e640-4821-84ec-c91fbf10d143-kube-api-access-gm9hk\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919177 kubelet[3262]: I0130 13:23:17.919185 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-lib-modules\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919375 kubelet[3262]: I0130 13:23:17.919205 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-var-run-calico\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919375 kubelet[3262]: I0130 13:23:17.919233 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-cni-net-dir\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919375 kubelet[3262]: I0130 13:23:17.919253 3262 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-cni-log-dir\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919375 kubelet[3262]: I0130 13:23:17.919270 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-flexvol-driver-host\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919375 kubelet[3262]: I0130 13:23:17.919329 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c36436a-e640-4821-84ec-c91fbf10d143-node-certs\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919576 kubelet[3262]: I0130 13:23:17.919378 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-cni-bin-dir\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919576 kubelet[3262]: I0130 13:23:17.919459 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-var-lib-calico\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919576 kubelet[3262]: I0130 13:23:17.919534 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c36436a-e640-4821-84ec-c91fbf10d143-tigera-ca-bundle\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.919742 kubelet[3262]: I0130 13:23:17.919577 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c36436a-e640-4821-84ec-c91fbf10d143-policysync\") pod \"calico-node-j2qfm\" (UID: \"3c36436a-e640-4821-84ec-c91fbf10d143\") " pod="calico-system/calico-node-j2qfm" Jan 30 13:23:17.962584 kubelet[3262]: I0130 13:23:17.962540 3262 topology_manager.go:215] "Topology Admit Handler" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" podNamespace="calico-system" podName="csi-node-driver-fdxpb" Jan 30 13:23:17.962950 kubelet[3262]: E0130 13:23:17.962925 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" Jan 30 13:23:18.020897 kubelet[3262]: I0130 13:23:18.020790 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dhtf\" (UniqueName: \"kubernetes.io/projected/b216c652-2303-4f3f-bb63-04ccb5f59378-kube-api-access-6dhtf\") pod \"csi-node-driver-fdxpb\" (UID: \"b216c652-2303-4f3f-bb63-04ccb5f59378\") " pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:18.021168 kubelet[3262]: I0130 13:23:18.020938 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b216c652-2303-4f3f-bb63-04ccb5f59378-kubelet-dir\") pod \"csi-node-driver-fdxpb\" (UID: \"b216c652-2303-4f3f-bb63-04ccb5f59378\") " 
pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:18.021287 kubelet[3262]: I0130 13:23:18.021189 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b216c652-2303-4f3f-bb63-04ccb5f59378-socket-dir\") pod \"csi-node-driver-fdxpb\" (UID: \"b216c652-2303-4f3f-bb63-04ccb5f59378\") " pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:18.021417 kubelet[3262]: I0130 13:23:18.021306 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b216c652-2303-4f3f-bb63-04ccb5f59378-varrun\") pod \"csi-node-driver-fdxpb\" (UID: \"b216c652-2303-4f3f-bb63-04ccb5f59378\") " pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:18.021417 kubelet[3262]: I0130 13:23:18.021396 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b216c652-2303-4f3f-bb63-04ccb5f59378-registration-dir\") pod \"csi-node-driver-fdxpb\" (UID: \"b216c652-2303-4f3f-bb63-04ccb5f59378\") " pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:18.023792 kubelet[3262]: E0130 13:23:18.023702 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.023792 kubelet[3262]: W0130 13:23:18.023742 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.023792 kubelet[3262]: E0130 13:23:18.023780 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.028744 kubelet[3262]: E0130 13:23:18.028674 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.028744 kubelet[3262]: W0130 13:23:18.028718 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.028744 kubelet[3262]: E0130 13:23:18.028755 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.040052 kubelet[3262]: E0130 13:23:18.039993 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.040052 kubelet[3262]: W0130 13:23:18.040030 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.040496 kubelet[3262]: E0130 13:23:18.040065 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.116619 containerd[1790]: time="2025-01-30T13:23:18.116584818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794969564d-tkq69,Uid:6e308052-d68e-468f-801d-19109b269097,Namespace:calico-system,Attempt:0,}" Jan 30 13:23:18.122514 kubelet[3262]: E0130 13:23:18.122490 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.122514 kubelet[3262]: W0130 13:23:18.122500 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.122514 kubelet[3262]: E0130 13:23:18.122512 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.122675 kubelet[3262]: E0130 13:23:18.122627 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.122675 kubelet[3262]: W0130 13:23:18.122635 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.122675 kubelet[3262]: E0130 13:23:18.122644 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.122762 kubelet[3262]: E0130 13:23:18.122751 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.122762 kubelet[3262]: W0130 13:23:18.122757 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.122802 kubelet[3262]: E0130 13:23:18.122763 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.122925 kubelet[3262]: E0130 13:23:18.122893 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.122925 kubelet[3262]: W0130 13:23:18.122898 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.122925 kubelet[3262]: E0130 13:23:18.122905 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.123034 kubelet[3262]: E0130 13:23:18.123000 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123034 kubelet[3262]: W0130 13:23:18.123007 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123034 kubelet[3262]: E0130 13:23:18.123013 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.123118 kubelet[3262]: E0130 13:23:18.123113 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123136 kubelet[3262]: W0130 13:23:18.123118 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123136 kubelet[3262]: E0130 13:23:18.123124 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.123205 kubelet[3262]: E0130 13:23:18.123199 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123224 kubelet[3262]: W0130 13:23:18.123205 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123224 kubelet[3262]: E0130 13:23:18.123212 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.123290 kubelet[3262]: E0130 13:23:18.123285 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123309 kubelet[3262]: W0130 13:23:18.123291 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123309 kubelet[3262]: E0130 13:23:18.123298 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.123422 kubelet[3262]: E0130 13:23:18.123416 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123422 kubelet[3262]: W0130 13:23:18.123422 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123459 kubelet[3262]: E0130 13:23:18.123427 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.123531 kubelet[3262]: E0130 13:23:18.123522 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123531 kubelet[3262]: W0130 13:23:18.123528 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123609 kubelet[3262]: E0130 13:23:18.123536 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.123645 kubelet[3262]: E0130 13:23:18.123630 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123645 kubelet[3262]: W0130 13:23:18.123634 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123645 kubelet[3262]: E0130 13:23:18.123640 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.123731 kubelet[3262]: E0130 13:23:18.123724 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123731 kubelet[3262]: W0130 13:23:18.123729 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123769 kubelet[3262]: E0130 13:23:18.123735 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.123830 kubelet[3262]: E0130 13:23:18.123824 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123830 kubelet[3262]: W0130 13:23:18.123829 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123867 kubelet[3262]: E0130 13:23:18.123841 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.123939 kubelet[3262]: E0130 13:23:18.123908 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.123939 kubelet[3262]: W0130 13:23:18.123914 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.123939 kubelet[3262]: E0130 13:23:18.123930 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.124006 kubelet[3262]: E0130 13:23:18.124000 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124033 kubelet[3262]: W0130 13:23:18.124006 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124033 kubelet[3262]: E0130 13:23:18.124018 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.124097 kubelet[3262]: E0130 13:23:18.124092 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124097 kubelet[3262]: W0130 13:23:18.124096 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124131 kubelet[3262]: E0130 13:23:18.124109 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.124177 kubelet[3262]: E0130 13:23:18.124172 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124196 kubelet[3262]: W0130 13:23:18.124177 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124196 kubelet[3262]: E0130 13:23:18.124184 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.124260 kubelet[3262]: E0130 13:23:18.124256 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124279 kubelet[3262]: W0130 13:23:18.124260 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124279 kubelet[3262]: E0130 13:23:18.124266 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.124370 kubelet[3262]: E0130 13:23:18.124365 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124370 kubelet[3262]: W0130 13:23:18.124369 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124410 kubelet[3262]: E0130 13:23:18.124376 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.124456 kubelet[3262]: E0130 13:23:18.124451 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124456 kubelet[3262]: W0130 13:23:18.124455 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124498 kubelet[3262]: E0130 13:23:18.124461 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.124546 kubelet[3262]: E0130 13:23:18.124541 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124546 kubelet[3262]: W0130 13:23:18.124545 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124579 kubelet[3262]: E0130 13:23:18.124551 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.124704 kubelet[3262]: E0130 13:23:18.124699 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124721 kubelet[3262]: W0130 13:23:18.124704 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124721 kubelet[3262]: E0130 13:23:18.124710 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.124787 kubelet[3262]: E0130 13:23:18.124782 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124787 kubelet[3262]: W0130 13:23:18.124787 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124823 kubelet[3262]: E0130 13:23:18.124791 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.124859 kubelet[3262]: E0130 13:23:18.124855 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124879 kubelet[3262]: W0130 13:23:18.124859 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124879 kubelet[3262]: E0130 13:23:18.124863 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:23:18.124944 kubelet[3262]: E0130 13:23:18.124939 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.124965 kubelet[3262]: W0130 13:23:18.124944 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.124965 kubelet[3262]: E0130 13:23:18.124948 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.126594 containerd[1790]: time="2025-01-30T13:23:18.126525017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:18.126594 containerd[1790]: time="2025-01-30T13:23:18.126553411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:18.126594 containerd[1790]: time="2025-01-30T13:23:18.126560462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:18.126654 containerd[1790]: time="2025-01-30T13:23:18.126602555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:18.128399 kubelet[3262]: E0130 13:23:18.128389 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:23:18.128399 kubelet[3262]: W0130 13:23:18.128398 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:23:18.128448 kubelet[3262]: E0130 13:23:18.128407 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:23:18.136718 containerd[1790]: time="2025-01-30T13:23:18.136663028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2qfm,Uid:3c36436a-e640-4821-84ec-c91fbf10d143,Namespace:calico-system,Attempt:0,}" Jan 30 13:23:18.143597 systemd[1]: Started cri-containerd-c71bdc6329a8ea34ac2e97a4dd249966e3cfc2233c6a65db7b866167669630d6.scope - libcontainer container c71bdc6329a8ea34ac2e97a4dd249966e3cfc2233c6a65db7b866167669630d6. Jan 30 13:23:18.146095 containerd[1790]: time="2025-01-30T13:23:18.146023229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:18.146095 containerd[1790]: time="2025-01-30T13:23:18.146053067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:18.146261 containerd[1790]: time="2025-01-30T13:23:18.146075723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:18.146304 containerd[1790]: time="2025-01-30T13:23:18.146294090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:18.152123 systemd[1]: Started cri-containerd-55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9.scope - libcontainer container 55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9. Jan 30 13:23:18.161918 containerd[1790]: time="2025-01-30T13:23:18.161894633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2qfm,Uid:3c36436a-e640-4821-84ec-c91fbf10d143,Namespace:calico-system,Attempt:0,} returns sandbox id \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\"" Jan 30 13:23:18.163446 containerd[1790]: time="2025-01-30T13:23:18.163430837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:23:18.166010 containerd[1790]: time="2025-01-30T13:23:18.165990970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794969564d-tkq69,Uid:6e308052-d68e-468f-801d-19109b269097,Namespace:calico-system,Attempt:0,} returns sandbox id \"c71bdc6329a8ea34ac2e97a4dd249966e3cfc2233c6a65db7b866167669630d6\"" Jan 30 13:23:19.476857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418879162.mount: Deactivated successfully. 
Jan 30 13:23:19.521464 containerd[1790]: time="2025-01-30T13:23:19.521438035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:19.521708 containerd[1790]: time="2025-01-30T13:23:19.521689525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:23:19.521999 containerd[1790]: time="2025-01-30T13:23:19.521989120Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:19.523029 containerd[1790]: time="2025-01-30T13:23:19.522990210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:19.523345 containerd[1790]: time="2025-01-30T13:23:19.523333388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.359885205s" Jan 30 13:23:19.523365 containerd[1790]: time="2025-01-30T13:23:19.523348778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:23:19.523963 containerd[1790]: time="2025-01-30T13:23:19.523929602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:23:19.524600 containerd[1790]: time="2025-01-30T13:23:19.524587420Z" level=info msg="CreateContainer within 
sandbox \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:23:19.530573 containerd[1790]: time="2025-01-30T13:23:19.530554513Z" level=info msg="CreateContainer within sandbox \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23\"" Jan 30 13:23:19.530891 containerd[1790]: time="2025-01-30T13:23:19.530861211Z" level=info msg="StartContainer for \"b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23\"" Jan 30 13:23:19.553983 systemd[1]: Started cri-containerd-b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23.scope - libcontainer container b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23. Jan 30 13:23:19.606094 containerd[1790]: time="2025-01-30T13:23:19.606054816Z" level=info msg="StartContainer for \"b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23\" returns successfully" Jan 30 13:23:19.617033 systemd[1]: cri-containerd-b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23.scope: Deactivated successfully. 
Jan 30 13:23:19.854095 containerd[1790]: time="2025-01-30T13:23:19.854051215Z" level=info msg="shim disconnected" id=b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23 namespace=k8s.io Jan 30 13:23:19.854095 containerd[1790]: time="2025-01-30T13:23:19.854095498Z" level=warning msg="cleaning up after shim disconnected" id=b9def3e2e0809c79f18227fb39ab5ca2adeca46e170e956bfe8062a72406fb23 namespace=k8s.io Jan 30 13:23:19.854213 containerd[1790]: time="2025-01-30T13:23:19.854102865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:23:20.079142 kubelet[3262]: E0130 13:23:20.079048 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" Jan 30 13:23:21.037418 containerd[1790]: time="2025-01-30T13:23:21.037363175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:21.037642 containerd[1790]: time="2025-01-30T13:23:21.037593013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:23:21.038014 containerd[1790]: time="2025-01-30T13:23:21.037973741Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:21.038864 containerd[1790]: time="2025-01-30T13:23:21.038825701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:21.039249 containerd[1790]: time="2025-01-30T13:23:21.039207178Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.515263021s" Jan 30 13:23:21.039249 containerd[1790]: time="2025-01-30T13:23:21.039222471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:23:21.039679 containerd[1790]: time="2025-01-30T13:23:21.039642093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:23:21.042744 containerd[1790]: time="2025-01-30T13:23:21.042726507Z" level=info msg="CreateContainer within sandbox \"c71bdc6329a8ea34ac2e97a4dd249966e3cfc2233c6a65db7b866167669630d6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:23:21.047032 containerd[1790]: time="2025-01-30T13:23:21.047016826Z" level=info msg="CreateContainer within sandbox \"c71bdc6329a8ea34ac2e97a4dd249966e3cfc2233c6a65db7b866167669630d6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d4dd0c0a2a8cb473a9ec2195af4196f90653a08405079ebf8bad521a99219ce\"" Jan 30 13:23:21.047231 containerd[1790]: time="2025-01-30T13:23:21.047218462Z" level=info msg="StartContainer for \"7d4dd0c0a2a8cb473a9ec2195af4196f90653a08405079ebf8bad521a99219ce\"" Jan 30 13:23:21.077748 systemd[1]: Started cri-containerd-7d4dd0c0a2a8cb473a9ec2195af4196f90653a08405079ebf8bad521a99219ce.scope - libcontainer container 7d4dd0c0a2a8cb473a9ec2195af4196f90653a08405079ebf8bad521a99219ce. 
Jan 30 13:23:21.108281 containerd[1790]: time="2025-01-30T13:23:21.108254807Z" level=info msg="StartContainer for \"7d4dd0c0a2a8cb473a9ec2195af4196f90653a08405079ebf8bad521a99219ce\" returns successfully"
Jan 30 13:23:22.078562 kubelet[3262]: E0130 13:23:22.078439 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378"
Jan 30 13:23:22.132649 kubelet[3262]: I0130 13:23:22.132630 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:23:23.368443 containerd[1790]: time="2025-01-30T13:23:23.368419733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:23:23.368668 containerd[1790]: time="2025-01-30T13:23:23.368641548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 30 13:23:23.368992 containerd[1790]: time="2025-01-30T13:23:23.368979670Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:23:23.370107 containerd[1790]: time="2025-01-30T13:23:23.370070197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:23:23.370393 containerd[1790]: time="2025-01-30T13:23:23.370382753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.330727046s"
Jan 30 13:23:23.370415 containerd[1790]: time="2025-01-30T13:23:23.370396475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 30 13:23:23.371363 containerd[1790]: time="2025-01-30T13:23:23.371350353Z" level=info msg="CreateContainer within sandbox \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 13:23:23.375827 containerd[1790]: time="2025-01-30T13:23:23.375809285Z" level=info msg="CreateContainer within sandbox \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758\""
Jan 30 13:23:23.376036 containerd[1790]: time="2025-01-30T13:23:23.376024507Z" level=info msg="StartContainer for \"006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758\""
Jan 30 13:23:23.424719 systemd[1]: Started cri-containerd-006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758.scope - libcontainer container 006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758.
Jan 30 13:23:23.449054 containerd[1790]: time="2025-01-30T13:23:23.449020377Z" level=info msg="StartContainer for \"006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758\" returns successfully"
Jan 30 13:23:23.958229 systemd[1]: cri-containerd-006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758.scope: Deactivated successfully.
Jan 30 13:23:23.970950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758-rootfs.mount: Deactivated successfully.
Jan 30 13:23:24.058001 kubelet[3262]: I0130 13:23:24.057936 3262 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:23:24.093844 systemd[1]: Created slice kubepods-besteffort-podb216c652_2303_4f3f_bb63_04ccb5f59378.slice - libcontainer container kubepods-besteffort-podb216c652_2303_4f3f_bb63_04ccb5f59378.slice.
Jan 30 13:23:24.097357 kubelet[3262]: I0130 13:23:24.097129 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-794969564d-tkq69" podStartSLOduration=4.224000235 podStartE2EDuration="7.097076425s" podCreationTimestamp="2025-01-30 13:23:17 +0000 UTC" firstStartedPulling="2025-01-30 13:23:18.166517562 +0000 UTC m=+20.131938092" lastFinishedPulling="2025-01-30 13:23:21.039593752 +0000 UTC m=+23.005014282" observedRunningTime="2025-01-30 13:23:21.138293182 +0000 UTC m=+23.103713734" watchObservedRunningTime="2025-01-30 13:23:24.097076425 +0000 UTC m=+26.062497046"
Jan 30 13:23:24.099763 kubelet[3262]: I0130 13:23:24.099626 3262 topology_manager.go:215] "Topology Admit Handler" podUID="24717288-c144-4929-abc3-3991af241c87" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bqbqv"
Jan 30 13:23:24.100757 kubelet[3262]: I0130 13:23:24.100690 3262 topology_manager.go:215] "Topology Admit Handler" podUID="e0884c07-f701-4d19-90b6-f7fc2b65a03a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ckrtn"
Jan 30 13:23:24.100994 containerd[1790]: time="2025-01-30T13:23:24.100920960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:0,}"
Jan 30 13:23:24.101960 kubelet[3262]: I0130 13:23:24.101904 3262 topology_manager.go:215] "Topology Admit Handler" podUID="6f232251-0174-4a52-bf07-5a185bd64bf0" podNamespace="calico-system" podName="calico-kube-controllers-6b9c4bc7cd-rnfv6"
Jan 30 13:23:24.103045 kubelet[3262]: I0130 13:23:24.102965 3262 topology_manager.go:215] "Topology Admit Handler" podUID="8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9" podNamespace="calico-apiserver" podName="calico-apiserver-57f78c5dd9-226tt"
Jan 30 13:23:24.104073 kubelet[3262]: I0130 13:23:24.104018 3262 topology_manager.go:215] "Topology Admit Handler" podUID="b8bd5def-dd66-4c62-af8b-6e8cab979050" podNamespace="calico-apiserver" podName="calico-apiserver-57f78c5dd9-9rtwp"
Jan 30 13:23:24.115749 systemd[1]: Created slice kubepods-burstable-pod24717288_c144_4929_abc3_3991af241c87.slice - libcontainer container kubepods-burstable-pod24717288_c144_4929_abc3_3991af241c87.slice.
Jan 30 13:23:24.132211 systemd[1]: Created slice kubepods-burstable-pode0884c07_f701_4d19_90b6_f7fc2b65a03a.slice - libcontainer container kubepods-burstable-pode0884c07_f701_4d19_90b6_f7fc2b65a03a.slice.
Jan 30 13:23:24.136593 systemd[1]: Created slice kubepods-besteffort-pod6f232251_0174_4a52_bf07_5a185bd64bf0.slice - libcontainer container kubepods-besteffort-pod6f232251_0174_4a52_bf07_5a185bd64bf0.slice.
Jan 30 13:23:24.139028 systemd[1]: Created slice kubepods-besteffort-pod8c3dd57d_2dde_4663_8d6a_5bcd3da5e6a9.slice - libcontainer container kubepods-besteffort-pod8c3dd57d_2dde_4663_8d6a_5bcd3da5e6a9.slice.
Jan 30 13:23:24.141221 systemd[1]: Created slice kubepods-besteffort-podb8bd5def_dd66_4c62_af8b_6e8cab979050.slice - libcontainer container kubepods-besteffort-podb8bd5def_dd66_4c62_af8b_6e8cab979050.slice.
Jan 30 13:23:24.162568 kubelet[3262]: I0130 13:23:24.162548 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd2w\" (UniqueName: \"kubernetes.io/projected/8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9-kube-api-access-2gd2w\") pod \"calico-apiserver-57f78c5dd9-226tt\" (UID: \"8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9\") " pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt"
Jan 30 13:23:24.162568 kubelet[3262]: I0130 13:23:24.162572 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj8q\" (UniqueName: \"kubernetes.io/projected/e0884c07-f701-4d19-90b6-f7fc2b65a03a-kube-api-access-5gj8q\") pod \"coredns-7db6d8ff4d-ckrtn\" (UID: \"e0884c07-f701-4d19-90b6-f7fc2b65a03a\") " pod="kube-system/coredns-7db6d8ff4d-ckrtn"
Jan 30 13:23:24.162687 kubelet[3262]: I0130 13:23:24.162583 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqdnq\" (UniqueName: \"kubernetes.io/projected/6f232251-0174-4a52-bf07-5a185bd64bf0-kube-api-access-vqdnq\") pod \"calico-kube-controllers-6b9c4bc7cd-rnfv6\" (UID: \"6f232251-0174-4a52-bf07-5a185bd64bf0\") " pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6"
Jan 30 13:23:24.162687 kubelet[3262]: I0130 13:23:24.162595 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f232251-0174-4a52-bf07-5a185bd64bf0-tigera-ca-bundle\") pod \"calico-kube-controllers-6b9c4bc7cd-rnfv6\" (UID: \"6f232251-0174-4a52-bf07-5a185bd64bf0\") " pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6"
Jan 30 13:23:24.162687 kubelet[3262]: I0130 13:23:24.162623 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nqpl\" (UniqueName: \"kubernetes.io/projected/24717288-c144-4929-abc3-3991af241c87-kube-api-access-8nqpl\") pod \"coredns-7db6d8ff4d-bqbqv\" (UID: \"24717288-c144-4929-abc3-3991af241c87\") " pod="kube-system/coredns-7db6d8ff4d-bqbqv"
Jan 30 13:23:24.162687 kubelet[3262]: I0130 13:23:24.162639 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0884c07-f701-4d19-90b6-f7fc2b65a03a-config-volume\") pod \"coredns-7db6d8ff4d-ckrtn\" (UID: \"e0884c07-f701-4d19-90b6-f7fc2b65a03a\") " pod="kube-system/coredns-7db6d8ff4d-ckrtn"
Jan 30 13:23:24.162687 kubelet[3262]: I0130 13:23:24.162650 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wdg\" (UniqueName: \"kubernetes.io/projected/b8bd5def-dd66-4c62-af8b-6e8cab979050-kube-api-access-68wdg\") pod \"calico-apiserver-57f78c5dd9-9rtwp\" (UID: \"b8bd5def-dd66-4c62-af8b-6e8cab979050\") " pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp"
Jan 30 13:23:24.162833 kubelet[3262]: I0130 13:23:24.162670 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24717288-c144-4929-abc3-3991af241c87-config-volume\") pod \"coredns-7db6d8ff4d-bqbqv\" (UID: \"24717288-c144-4929-abc3-3991af241c87\") " pod="kube-system/coredns-7db6d8ff4d-bqbqv"
Jan 30 13:23:24.162833 kubelet[3262]: I0130 13:23:24.162680 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9-calico-apiserver-certs\") pod \"calico-apiserver-57f78c5dd9-226tt\" (UID: \"8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9\") " pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt"
Jan 30 13:23:24.162833 kubelet[3262]: I0130 13:23:24.162699 3262 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8bd5def-dd66-4c62-af8b-6e8cab979050-calico-apiserver-certs\") pod \"calico-apiserver-57f78c5dd9-9rtwp\" (UID: \"b8bd5def-dd66-4c62-af8b-6e8cab979050\") " pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp"
Jan 30 13:23:24.423836 containerd[1790]: time="2025-01-30T13:23:24.423777583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:0,}"
Jan 30 13:23:24.436609 containerd[1790]: time="2025-01-30T13:23:24.436537933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:0,}"
Jan 30 13:23:24.438302 containerd[1790]: time="2025-01-30T13:23:24.438235036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:0,}"
Jan 30 13:23:24.440950 containerd[1790]: time="2025-01-30T13:23:24.440878348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 13:23:24.443727 containerd[1790]: time="2025-01-30T13:23:24.443656285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 13:23:24.573524 containerd[1790]: time="2025-01-30T13:23:24.573463723Z" level=error msg="Failed to destroy network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.573685 containerd[1790]: time="2025-01-30T13:23:24.573670281Z" level=error msg="encountered an error cleaning up failed sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.573723 containerd[1790]: time="2025-01-30T13:23:24.573707449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.573916 kubelet[3262]: E0130 13:23:24.573885 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.573958 kubelet[3262]: E0130 13:23:24.573934 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb"
Jan 30 13:23:24.573958 kubelet[3262]: E0130 13:23:24.573948 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb"
Jan 30 13:23:24.574019 kubelet[3262]: E0130 13:23:24.573974 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378"
Jan 30 13:23:24.574976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c-shm.mount: Deactivated successfully.
Jan 30 13:23:24.603434 containerd[1790]: time="2025-01-30T13:23:24.603394825Z" level=info msg="shim disconnected" id=006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758 namespace=k8s.io
Jan 30 13:23:24.603434 containerd[1790]: time="2025-01-30T13:23:24.603427772Z" level=warning msg="cleaning up after shim disconnected" id=006a53a198ad99fafe194f5acef1ef2777347bcef2bdb2069e5b1ff1394d3758 namespace=k8s.io
Jan 30 13:23:24.603434 containerd[1790]: time="2025-01-30T13:23:24.603432987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:23:24.643210 containerd[1790]: time="2025-01-30T13:23:24.643178207Z" level=error msg="Failed to destroy network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643303 containerd[1790]: time="2025-01-30T13:23:24.643269721Z" level=error msg="Failed to destroy network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643438 containerd[1790]: time="2025-01-30T13:23:24.643418808Z" level=error msg="encountered an error cleaning up failed sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643570 containerd[1790]: time="2025-01-30T13:23:24.643472749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643570 containerd[1790]: time="2025-01-30T13:23:24.643429343Z" level=error msg="encountered an error cleaning up failed sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643685 containerd[1790]: time="2025-01-30T13:23:24.643582245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643735 kubelet[3262]: E0130 13:23:24.643690 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643766 kubelet[3262]: E0130 13:23:24.643736 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt"
Jan 30 13:23:24.643766 kubelet[3262]: E0130 13:23:24.643749 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt"
Jan 30 13:23:24.643819 kubelet[3262]: E0130 13:23:24.643690 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.643819 kubelet[3262]: E0130 13:23:24.643809 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp"
Jan 30 13:23:24.643880 kubelet[3262]: E0130 13:23:24.643829 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp"
Jan 30 13:23:24.643880 kubelet[3262]: E0130 13:23:24.643857 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" podUID="b8bd5def-dd66-4c62-af8b-6e8cab979050"
Jan 30 13:23:24.643936 kubelet[3262]: E0130 13:23:24.643777 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" podUID="8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9"
Jan 30 13:23:24.645109 containerd[1790]: time="2025-01-30T13:23:24.645086472Z" level=error msg="Failed to destroy network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645271 containerd[1790]: time="2025-01-30T13:23:24.645256699Z" level=error msg="encountered an error cleaning up failed sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645312 containerd[1790]: time="2025-01-30T13:23:24.645268455Z" level=error msg="Failed to destroy network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645345 containerd[1790]: time="2025-01-30T13:23:24.645305320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645423 kubelet[3262]: E0130 13:23:24.645405 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645450 kubelet[3262]: E0130 13:23:24.645436 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6"
Jan 30 13:23:24.645475 containerd[1790]: time="2025-01-30T13:23:24.645419065Z" level=error msg="Failed to destroy network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645475 containerd[1790]: time="2025-01-30T13:23:24.645429378Z" level=error msg="encountered an error cleaning up failed sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645522 kubelet[3262]: E0130 13:23:24.645453 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6"
Jan 30 13:23:24.645522 kubelet[3262]: E0130 13:23:24.645491 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" podUID="6f232251-0174-4a52-bf07-5a185bd64bf0"
Jan 30 13:23:24.645575 containerd[1790]: time="2025-01-30T13:23:24.645495503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645596 kubelet[3262]: E0130 13:23:24.645562 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645596 kubelet[3262]: E0130 13:23:24.645580 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn"
Jan 30 13:23:24.645596 kubelet[3262]: E0130 13:23:24.645591 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn"
Jan 30 13:23:24.645654 kubelet[3262]: E0130 13:23:24.645608 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ckrtn" podUID="e0884c07-f701-4d19-90b6-f7fc2b65a03a"
Jan 30 13:23:24.645690 containerd[1790]: time="2025-01-30T13:23:24.645588243Z" level=error msg="encountered an error cleaning up failed sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645690 containerd[1790]: time="2025-01-30T13:23:24.645612604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645727 kubelet[3262]: E0130 13:23:24.645669 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:23:24.645727 kubelet[3262]: E0130 13:23:24.645684 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv"
Jan 30 13:23:24.645727 kubelet[3262]: E0130 13:23:24.645698 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv"
Jan 30 13:23:24.645781 kubelet[3262]: E0130 13:23:24.645713 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bqbqv" podUID="24717288-c144-4929-abc3-3991af241c87"
Jan 30 13:23:25.140783 kubelet[3262]: I0130 13:23:25.140704 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f"
Jan 30 13:23:25.142303 containerd[1790]: time="2025-01-30T13:23:25.142220300Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\""
Jan 30 13:23:25.142898 containerd[1790]: time="2025-01-30T13:23:25.142832181Z" level=info msg="Ensure that sandbox 605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f in task-service has been cleanup successfully"
Jan 30 13:23:25.143114 kubelet[3262]: I0130 13:23:25.143063 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2"
Jan 30 13:23:25.143356 containerd[1790]:
time="2025-01-30T13:23:25.143245350Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:25.143356 containerd[1790]: time="2025-01-30T13:23:25.143300898Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:25.144335 containerd[1790]: time="2025-01-30T13:23:25.144257630Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:25.144535 containerd[1790]: time="2025-01-30T13:23:25.144347751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:1,}" Jan 30 13:23:25.144861 containerd[1790]: time="2025-01-30T13:23:25.144803296Z" level=info msg="Ensure that sandbox 656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2 in task-service has been cleanup successfully" Jan 30 13:23:25.145146 kubelet[3262]: I0130 13:23:25.145085 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422" Jan 30 13:23:25.145287 containerd[1790]: time="2025-01-30T13:23:25.145231847Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:25.145287 containerd[1790]: time="2025-01-30T13:23:25.145275624Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns successfully" Jan 30 13:23:25.145628 containerd[1790]: time="2025-01-30T13:23:25.145597580Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:25.145664 containerd[1790]: time="2025-01-30T13:23:25.145602291Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:1,}" Jan 30 13:23:25.145742 containerd[1790]: time="2025-01-30T13:23:25.145729262Z" level=info msg="Ensure that sandbox 26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422 in task-service has been cleanup successfully" Jan 30 13:23:25.145825 containerd[1790]: time="2025-01-30T13:23:25.145815602Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:25.145825 containerd[1790]: time="2025-01-30T13:23:25.145824208Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:25.145869 kubelet[3262]: I0130 13:23:25.145829 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9" Jan 30 13:23:25.146000 containerd[1790]: time="2025-01-30T13:23:25.145989941Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:25.146072 containerd[1790]: time="2025-01-30T13:23:25.146062828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:23:25.146101 containerd[1790]: time="2025-01-30T13:23:25.146092270Z" level=info msg="Ensure that sandbox 4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9 in task-service has been cleanup successfully" Jan 30 13:23:25.146176 containerd[1790]: time="2025-01-30T13:23:25.146164335Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:25.146176 containerd[1790]: time="2025-01-30T13:23:25.146174325Z" level=info msg="StopPodSandbox for 
\"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:25.146275 kubelet[3262]: I0130 13:23:25.146268 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c" Jan 30 13:23:25.146377 containerd[1790]: time="2025-01-30T13:23:25.146361239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:1,}" Jan 30 13:23:25.146508 containerd[1790]: time="2025-01-30T13:23:25.146498274Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:25.146582 containerd[1790]: time="2025-01-30T13:23:25.146574215Z" level=info msg="Ensure that sandbox 5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c in task-service has been cleanup successfully" Jan 30 13:23:25.146672 containerd[1790]: time="2025-01-30T13:23:25.146662431Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:25.146672 containerd[1790]: time="2025-01-30T13:23:25.146670697Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:25.146828 containerd[1790]: time="2025-01-30T13:23:25.146818239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:1,}" Jan 30 13:23:25.147261 kubelet[3262]: I0130 13:23:25.147253 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38" Jan 30 13:23:25.147295 containerd[1790]: time="2025-01-30T13:23:25.147286748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 
13:23:25.147423 containerd[1790]: time="2025-01-30T13:23:25.147413817Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:25.147522 containerd[1790]: time="2025-01-30T13:23:25.147498364Z" level=info msg="Ensure that sandbox 5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38 in task-service has been cleanup successfully" Jan 30 13:23:25.147607 containerd[1790]: time="2025-01-30T13:23:25.147597053Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:25.147607 containerd[1790]: time="2025-01-30T13:23:25.147604764Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:25.147782 containerd[1790]: time="2025-01-30T13:23:25.147769952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:23:25.188242 containerd[1790]: time="2025-01-30T13:23:25.188200709Z" level=error msg="Failed to destroy network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.188547 containerd[1790]: time="2025-01-30T13:23:25.188531671Z" level=error msg="encountered an error cleaning up failed sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.188602 containerd[1790]: time="2025-01-30T13:23:25.188589668Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.188761 kubelet[3262]: E0130 13:23:25.188736 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.188979 kubelet[3262]: E0130 13:23:25.188778 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:25.188979 kubelet[3262]: E0130 13:23:25.188796 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:25.188979 kubelet[3262]: E0130 13:23:25.188828 3262 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" podUID="b8bd5def-dd66-4c62-af8b-6e8cab979050" Jan 30 13:23:25.189757 containerd[1790]: time="2025-01-30T13:23:25.189739175Z" level=error msg="Failed to destroy network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.189797 containerd[1790]: time="2025-01-30T13:23:25.189739354Z" level=error msg="Failed to destroy network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.189968 containerd[1790]: time="2025-01-30T13:23:25.189955047Z" level=error msg="encountered an error cleaning up failed sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 30 13:23:25.189998 containerd[1790]: time="2025-01-30T13:23:25.189972509Z" level=error msg="encountered an error cleaning up failed sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.189998 containerd[1790]: time="2025-01-30T13:23:25.189983635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190039 containerd[1790]: time="2025-01-30T13:23:25.190004107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190094 kubelet[3262]: E0130 13:23:25.190080 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:23:25.190094 kubelet[3262]: E0130 13:23:25.190085 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190163 kubelet[3262]: E0130 13:23:25.190104 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:25.190163 kubelet[3262]: E0130 13:23:25.190110 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:25.190163 kubelet[3262]: E0130 13:23:25.190116 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:25.190163 
kubelet[3262]: E0130 13:23:25.190126 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:25.190245 kubelet[3262]: E0130 13:23:25.190138 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ckrtn" podUID="e0884c07-f701-4d19-90b6-f7fc2b65a03a" Jan 30 13:23:25.190245 kubelet[3262]: E0130 13:23:25.190153 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" podUID="6f232251-0174-4a52-bf07-5a185bd64bf0" Jan 30 13:23:25.190305 containerd[1790]: time="2025-01-30T13:23:25.190283772Z" level=error msg="Failed to destroy network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190344 containerd[1790]: time="2025-01-30T13:23:25.190325991Z" level=error msg="Failed to destroy network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190435 containerd[1790]: time="2025-01-30T13:23:25.190423650Z" level=error msg="encountered an error cleaning up failed sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190458 containerd[1790]: time="2025-01-30T13:23:25.190446824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190487 containerd[1790]: time="2025-01-30T13:23:25.190470016Z" level=error 
msg="encountered an error cleaning up failed sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190507 containerd[1790]: time="2025-01-30T13:23:25.190498206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190594 kubelet[3262]: E0130 13:23:25.190550 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190633 kubelet[3262]: E0130 13:23:25.190597 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.190633 kubelet[3262]: E0130 13:23:25.190623 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:25.190726 kubelet[3262]: E0130 13:23:25.190637 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:25.190726 kubelet[3262]: E0130 13:23:25.190600 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:25.190726 kubelet[3262]: E0130 13:23:25.190693 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:25.190795 kubelet[3262]: E0130 13:23:25.190693 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" Jan 30 13:23:25.190795 kubelet[3262]: E0130 13:23:25.190712 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bqbqv" podUID="24717288-c144-4929-abc3-3991af241c87" Jan 30 13:23:25.193298 containerd[1790]: time="2025-01-30T13:23:25.193258373Z" level=error msg="Failed to destroy network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.193454 containerd[1790]: time="2025-01-30T13:23:25.193442605Z" level=error msg="encountered an error cleaning up failed sandbox 
\"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.193526 containerd[1790]: time="2025-01-30T13:23:25.193484045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.193632 kubelet[3262]: E0130 13:23:25.193592 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:25.193632 kubelet[3262]: E0130 13:23:25.193620 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:25.193677 kubelet[3262]: E0130 13:23:25.193632 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:25.193677 kubelet[3262]: E0130 13:23:25.193653 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" podUID="8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9" Jan 30 13:23:25.378181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422-shm.mount: Deactivated successfully. Jan 30 13:23:25.378264 systemd[1]: run-netns-cni\x2d2b3b0036\x2d3c4c\x2ddebb\x2ddbab\x2d0f22d6fb441d.mount: Deactivated successfully. 
Jan 30 13:23:26.149744 kubelet[3262]: I0130 13:23:26.149729 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf" Jan 30 13:23:26.150108 containerd[1790]: time="2025-01-30T13:23:26.150047414Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:26.150313 containerd[1790]: time="2025-01-30T13:23:26.150222270Z" level=info msg="Ensure that sandbox 2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf in task-service has been cleanup successfully" Jan 30 13:23:26.150350 containerd[1790]: time="2025-01-30T13:23:26.150341477Z" level=info msg="TearDown network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" successfully" Jan 30 13:23:26.150368 containerd[1790]: time="2025-01-30T13:23:26.150349815Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" returns successfully" Jan 30 13:23:26.150469 kubelet[3262]: I0130 13:23:26.150460 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772" Jan 30 13:23:26.150512 containerd[1790]: time="2025-01-30T13:23:26.150460364Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:26.150554 containerd[1790]: time="2025-01-30T13:23:26.150520364Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:26.150593 containerd[1790]: time="2025-01-30T13:23:26.150553695Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns successfully" Jan 30 13:23:26.150742 containerd[1790]: time="2025-01-30T13:23:26.150730491Z" level=info msg="StopPodSandbox for 
\"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:26.150765 containerd[1790]: time="2025-01-30T13:23:26.150753374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:2,}" Jan 30 13:23:26.150863 containerd[1790]: time="2025-01-30T13:23:26.150850878Z" level=info msg="Ensure that sandbox 9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772 in task-service has been cleanup successfully" Jan 30 13:23:26.150959 containerd[1790]: time="2025-01-30T13:23:26.150948437Z" level=info msg="TearDown network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" successfully" Jan 30 13:23:26.150977 containerd[1790]: time="2025-01-30T13:23:26.150960443Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" returns successfully" Jan 30 13:23:26.151057 kubelet[3262]: I0130 13:23:26.151045 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9" Jan 30 13:23:26.151105 containerd[1790]: time="2025-01-30T13:23:26.151094337Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:26.151158 containerd[1790]: time="2025-01-30T13:23:26.151137347Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:26.151186 containerd[1790]: time="2025-01-30T13:23:26.151157664Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:26.151256 containerd[1790]: time="2025-01-30T13:23:26.151242740Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 13:23:26.151345 containerd[1790]: 
time="2025-01-30T13:23:26.151333406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:23:26.151383 containerd[1790]: time="2025-01-30T13:23:26.151372695Z" level=info msg="Ensure that sandbox 3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9 in task-service has been cleanup successfully" Jan 30 13:23:26.151472 containerd[1790]: time="2025-01-30T13:23:26.151460556Z" level=info msg="TearDown network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" successfully" Jan 30 13:23:26.151506 containerd[1790]: time="2025-01-30T13:23:26.151471984Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" returns successfully" Jan 30 13:23:26.151584 containerd[1790]: time="2025-01-30T13:23:26.151574644Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:26.151611 kubelet[3262]: I0130 13:23:26.151588 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943" Jan 30 13:23:26.151633 containerd[1790]: time="2025-01-30T13:23:26.151614912Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:26.151633 containerd[1790]: time="2025-01-30T13:23:26.151621460Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:26.151777 containerd[1790]: time="2025-01-30T13:23:26.151764552Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:26.151847 containerd[1790]: time="2025-01-30T13:23:26.151837466Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:2,}" Jan 30 13:23:26.151923 containerd[1790]: time="2025-01-30T13:23:26.151911352Z" level=info msg="Ensure that sandbox 59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943 in task-service has been cleanup successfully" Jan 30 13:23:26.152034 containerd[1790]: time="2025-01-30T13:23:26.152020313Z" level=info msg="TearDown network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" successfully" Jan 30 13:23:26.152034 containerd[1790]: time="2025-01-30T13:23:26.152032054Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" returns successfully" Jan 30 13:23:26.152053 systemd[1]: run-netns-cni\x2d66447419\x2d07ba\x2dd828\x2d9a77\x2dfbf501bd044a.mount: Deactivated successfully. Jan 30 13:23:26.152268 containerd[1790]: time="2025-01-30T13:23:26.152162284Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:26.152268 containerd[1790]: time="2025-01-30T13:23:26.152213879Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:26.152268 containerd[1790]: time="2025-01-30T13:23:26.152224284Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:26.152332 kubelet[3262]: I0130 13:23:26.152191 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0" Jan 30 13:23:26.152491 containerd[1790]: time="2025-01-30T13:23:26.152472122Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:26.152576 containerd[1790]: time="2025-01-30T13:23:26.152499537Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:2,}" Jan 30 13:23:26.152638 containerd[1790]: time="2025-01-30T13:23:26.152586747Z" level=info msg="Ensure that sandbox 97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0 in task-service has been cleanup successfully" Jan 30 13:23:26.152703 containerd[1790]: time="2025-01-30T13:23:26.152689855Z" level=info msg="TearDown network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" successfully" Jan 30 13:23:26.152737 containerd[1790]: time="2025-01-30T13:23:26.152702634Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" returns successfully" Jan 30 13:23:26.152810 kubelet[3262]: I0130 13:23:26.152801 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529" Jan 30 13:23:26.152845 containerd[1790]: time="2025-01-30T13:23:26.152815312Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:26.152875 containerd[1790]: time="2025-01-30T13:23:26.152865057Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:26.152906 containerd[1790]: time="2025-01-30T13:23:26.152874380Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:26.152971 containerd[1790]: time="2025-01-30T13:23:26.152962183Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:26.153069 containerd[1790]: time="2025-01-30T13:23:26.153057170Z" level=info msg="Ensure that sandbox a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529 in task-service has been 
cleanup successfully" Jan 30 13:23:26.153107 containerd[1790]: time="2025-01-30T13:23:26.153058899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:23:26.153138 containerd[1790]: time="2025-01-30T13:23:26.153132598Z" level=info msg="TearDown network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" successfully" Jan 30 13:23:26.153166 containerd[1790]: time="2025-01-30T13:23:26.153140031Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" returns successfully" Jan 30 13:23:26.153255 containerd[1790]: time="2025-01-30T13:23:26.153244650Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:26.153304 containerd[1790]: time="2025-01-30T13:23:26.153283454Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:26.153329 containerd[1790]: time="2025-01-30T13:23:26.153303841Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:26.153486 containerd[1790]: time="2025-01-30T13:23:26.153471555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:2,}" Jan 30 13:23:26.154068 systemd[1]: run-netns-cni\x2d4201640f\x2db43e\x2dd931\x2d56cc\x2de54616ca69b9.mount: Deactivated successfully. Jan 30 13:23:26.154118 systemd[1]: run-netns-cni\x2d8355157f\x2d289e\x2d28a8\x2d8e93\x2d58c734d7b7f8.mount: Deactivated successfully. Jan 30 13:23:26.154153 systemd[1]: run-netns-cni\x2d2c569ef7\x2df9ee\x2d2122\x2d6114\x2d2191099d8231.mount: Deactivated successfully. 
Jan 30 13:23:26.154187 systemd[1]: run-netns-cni\x2dc06fe879\x2d802b\x2d8d0f\x2d93a7\x2dfc52283464f7.mount: Deactivated successfully. Jan 30 13:23:26.156648 systemd[1]: run-netns-cni\x2d88e0a722\x2d6904\x2d96cd\x2dba40\x2dcbc2fac04508.mount: Deactivated successfully. Jan 30 13:23:26.195944 containerd[1790]: time="2025-01-30T13:23:26.195910753Z" level=error msg="Failed to destroy network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196121 containerd[1790]: time="2025-01-30T13:23:26.196108508Z" level=error msg="encountered an error cleaning up failed sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196155 containerd[1790]: time="2025-01-30T13:23:26.196144792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196322 kubelet[3262]: E0130 13:23:26.196296 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196356 containerd[1790]: time="2025-01-30T13:23:26.196305628Z" level=error msg="Failed to destroy network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196382 kubelet[3262]: E0130 13:23:26.196340 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:26.196382 kubelet[3262]: E0130 13:23:26.196354 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:26.196432 containerd[1790]: time="2025-01-30T13:23:26.196308487Z" level=error msg="Failed to destroy network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196467 kubelet[3262]: E0130 13:23:26.196383 3262 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" podUID="8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9" Jan 30 13:23:26.196528 containerd[1790]: time="2025-01-30T13:23:26.196473680Z" level=error msg="encountered an error cleaning up failed sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196528 containerd[1790]: time="2025-01-30T13:23:26.196506313Z" level=error msg="encountered an error cleaning up failed sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196588 containerd[1790]: time="2025-01-30T13:23:26.196541934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196623 containerd[1790]: time="2025-01-30T13:23:26.196510148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196741 kubelet[3262]: E0130 13:23:26.196675 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196741 kubelet[3262]: E0130 13:23:26.196706 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:26.196741 kubelet[3262]: E0130 13:23:26.196721 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:26.196824 kubelet[3262]: E0130 13:23:26.196744 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" podUID="6f232251-0174-4a52-bf07-5a185bd64bf0" Jan 30 13:23:26.196824 kubelet[3262]: E0130 13:23:26.196774 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.196824 kubelet[3262]: E0130 13:23:26.196803 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:26.196912 kubelet[3262]: E0130 13:23:26.196819 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:26.196912 kubelet[3262]: E0130 13:23:26.196845 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bqbqv" podUID="24717288-c144-4929-abc3-3991af241c87" Jan 30 13:23:26.197124 containerd[1790]: time="2025-01-30T13:23:26.197106527Z" level=error msg="Failed to destroy network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197163 containerd[1790]: time="2025-01-30T13:23:26.197147687Z" level=error msg="Failed to destroy network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197288 containerd[1790]: time="2025-01-30T13:23:26.197274033Z" level=error msg="encountered an error cleaning up failed sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197324 containerd[1790]: time="2025-01-30T13:23:26.197287032Z" level=error msg="Failed to destroy network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197324 containerd[1790]: time="2025-01-30T13:23:26.197302591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197383 containerd[1790]: time="2025-01-30T13:23:26.197283142Z" level=error msg="encountered an error cleaning up failed sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:23:26.197404 containerd[1790]: time="2025-01-30T13:23:26.197378858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197433 kubelet[3262]: E0130 13:23:26.197381 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197433 kubelet[3262]: E0130 13:23:26.197400 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:26.197433 kubelet[3262]: E0130 13:23:26.197410 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" 
Jan 30 13:23:26.197536 containerd[1790]: time="2025-01-30T13:23:26.197422254Z" level=error msg="encountered an error cleaning up failed sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197536 containerd[1790]: time="2025-01-30T13:23:26.197445435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197593 kubelet[3262]: E0130 13:23:26.197427 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ckrtn" podUID="e0884c07-f701-4d19-90b6-f7fc2b65a03a" Jan 30 13:23:26.197593 kubelet[3262]: E0130 13:23:26.197440 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197593 kubelet[3262]: E0130 13:23:26.197467 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:26.197661 kubelet[3262]: E0130 13:23:26.197494 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:26.197661 kubelet[3262]: E0130 13:23:26.197521 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdxpb" 
podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" Jan 30 13:23:26.197661 kubelet[3262]: E0130 13:23:26.197531 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:26.197724 kubelet[3262]: E0130 13:23:26.197544 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:26.197724 kubelet[3262]: E0130 13:23:26.197553 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:26.197724 kubelet[3262]: E0130 13:23:26.197565 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" podUID="b8bd5def-dd66-4c62-af8b-6e8cab979050" Jan 30 13:23:26.377023 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6-shm.mount: Deactivated successfully. Jan 30 13:23:27.155195 kubelet[3262]: I0130 13:23:27.155176 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772" Jan 30 13:23:27.155565 containerd[1790]: time="2025-01-30T13:23:27.155545182Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" Jan 30 13:23:27.155722 containerd[1790]: time="2025-01-30T13:23:27.155692876Z" level=info msg="Ensure that sandbox e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772 in task-service has been cleanup successfully" Jan 30 13:23:27.155817 containerd[1790]: time="2025-01-30T13:23:27.155802725Z" level=info msg="TearDown network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" successfully" Jan 30 13:23:27.155858 containerd[1790]: time="2025-01-30T13:23:27.155816284Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" returns successfully" Jan 30 13:23:27.155903 kubelet[3262]: I0130 13:23:27.155891 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51" Jan 30 13:23:27.155991 containerd[1790]: time="2025-01-30T13:23:27.155978289Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 
13:23:27.156059 containerd[1790]: time="2025-01-30T13:23:27.156028561Z" level=info msg="TearDown network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" successfully" Jan 30 13:23:27.156088 containerd[1790]: time="2025-01-30T13:23:27.156059365Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" returns successfully" Jan 30 13:23:27.156117 containerd[1790]: time="2025-01-30T13:23:27.156101701Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" Jan 30 13:23:27.156218 containerd[1790]: time="2025-01-30T13:23:27.156208283Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:27.156255 containerd[1790]: time="2025-01-30T13:23:27.156210093Z" level=info msg="Ensure that sandbox 83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51 in task-service has been cleanup successfully" Jan 30 13:23:27.156289 containerd[1790]: time="2025-01-30T13:23:27.156249425Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:27.156289 containerd[1790]: time="2025-01-30T13:23:27.156273431Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:27.156350 containerd[1790]: time="2025-01-30T13:23:27.156339911Z" level=info msg="TearDown network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" successfully" Jan 30 13:23:27.156375 containerd[1790]: time="2025-01-30T13:23:27.156351136Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" returns successfully" Jan 30 13:23:27.156533 containerd[1790]: time="2025-01-30T13:23:27.156520327Z" level=info msg="StopPodSandbox for 
\"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:27.156593 containerd[1790]: time="2025-01-30T13:23:27.156580201Z" level=info msg="TearDown network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" successfully" Jan 30 13:23:27.156644 containerd[1790]: time="2025-01-30T13:23:27.156591840Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" returns successfully" Jan 30 13:23:27.156644 containerd[1790]: time="2025-01-30T13:23:27.156621946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:3,}" Jan 30 13:23:27.156702 kubelet[3262]: I0130 13:23:27.156636 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47" Jan 30 13:23:27.156805 containerd[1790]: time="2025-01-30T13:23:27.156789767Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:27.156872 containerd[1790]: time="2025-01-30T13:23:27.156841064Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:27.156911 containerd[1790]: time="2025-01-30T13:23:27.156871203Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:27.156911 containerd[1790]: time="2025-01-30T13:23:27.156881438Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" Jan 30 13:23:27.156995 containerd[1790]: time="2025-01-30T13:23:27.156984431Z" level=info msg="Ensure that sandbox 6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47 in task-service has been cleanup successfully" Jan 30 13:23:27.157070 
containerd[1790]: time="2025-01-30T13:23:27.157061505Z" level=info msg="TearDown network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" successfully" Jan 30 13:23:27.157094 containerd[1790]: time="2025-01-30T13:23:27.157070809Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" returns successfully" Jan 30 13:23:27.157094 containerd[1790]: time="2025-01-30T13:23:27.157073890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:3,}" Jan 30 13:23:27.157219 containerd[1790]: time="2025-01-30T13:23:27.157205181Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:27.157265 containerd[1790]: time="2025-01-30T13:23:27.157256210Z" level=info msg="TearDown network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" successfully" Jan 30 13:23:27.157289 containerd[1790]: time="2025-01-30T13:23:27.157265933Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" returns successfully" Jan 30 13:23:27.157392 containerd[1790]: time="2025-01-30T13:23:27.157383738Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:27.157428 containerd[1790]: time="2025-01-30T13:23:27.157420932Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:27.157458 containerd[1790]: time="2025-01-30T13:23:27.157427225Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:27.157495 kubelet[3262]: I0130 13:23:27.157432 3262 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea" Jan 30 13:23:27.157627 systemd[1]: run-netns-cni\x2de9be3668\x2dae43\x2d489f\x2ddc1d\x2dea78f034c98a.mount: Deactivated successfully. Jan 30 13:23:27.157799 containerd[1790]: time="2025-01-30T13:23:27.157621447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:23:27.157799 containerd[1790]: time="2025-01-30T13:23:27.157651612Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" Jan 30 13:23:27.157799 containerd[1790]: time="2025-01-30T13:23:27.157773432Z" level=info msg="Ensure that sandbox d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea in task-service has been cleanup successfully" Jan 30 13:23:27.157887 containerd[1790]: time="2025-01-30T13:23:27.157875790Z" level=info msg="TearDown network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" successfully" Jan 30 13:23:27.157918 containerd[1790]: time="2025-01-30T13:23:27.157887091Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" returns successfully" Jan 30 13:23:27.158005 containerd[1790]: time="2025-01-30T13:23:27.157993899Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:27.158070 containerd[1790]: time="2025-01-30T13:23:27.158047278Z" level=info msg="TearDown network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" successfully" Jan 30 13:23:27.158107 containerd[1790]: time="2025-01-30T13:23:27.158069409Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" returns successfully" Jan 30 13:23:27.158127 kubelet[3262]: I0130 13:23:27.158117 3262 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6" Jan 30 13:23:27.158206 containerd[1790]: time="2025-01-30T13:23:27.158196057Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:27.158246 containerd[1790]: time="2025-01-30T13:23:27.158236833Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:27.158271 containerd[1790]: time="2025-01-30T13:23:27.158246837Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:27.158323 containerd[1790]: time="2025-01-30T13:23:27.158313720Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" Jan 30 13:23:27.158414 containerd[1790]: time="2025-01-30T13:23:27.158404475Z" level=info msg="Ensure that sandbox f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6 in task-service has been cleanup successfully" Jan 30 13:23:27.158489 containerd[1790]: time="2025-01-30T13:23:27.158406705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:3,}" Jan 30 13:23:27.158518 containerd[1790]: time="2025-01-30T13:23:27.158493829Z" level=info msg="TearDown network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" successfully" Jan 30 13:23:27.158518 containerd[1790]: time="2025-01-30T13:23:27.158501938Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" returns successfully" Jan 30 13:23:27.158611 containerd[1790]: time="2025-01-30T13:23:27.158601010Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:27.159806 containerd[1790]: 
time="2025-01-30T13:23:27.158643161Z" level=info msg="TearDown network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.158649882Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" returns successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.158784197Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.158824753Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.158831548Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.158996110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:3,}" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159040479Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159144458Z" level=info msg="Ensure that sandbox 260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632 in task-service has been cleanup successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159216240Z" level=info msg="TearDown network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159223498Z" level=info msg="StopPodSandbox for 
\"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" returns successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159332923Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159373315Z" level=info msg="TearDown network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159379751Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" returns successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159502688Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159538319Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159543942Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:27.159806 containerd[1790]: time="2025-01-30T13:23:27.159699226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:23:27.159792 systemd[1]: run-netns-cni\x2d2f7e73d5\x2dd145\x2d136f\x2dd5cd\x2d85d4c3c1e27d.mount: Deactivated successfully. 
Jan 30 13:23:27.160191 kubelet[3262]: I0130 13:23:27.158823 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632" Jan 30 13:23:27.159845 systemd[1]: run-netns-cni\x2d9590b32b\x2d7bff\x2d2805\x2d52f9\x2da1490d652e40.mount: Deactivated successfully. Jan 30 13:23:27.159878 systemd[1]: run-netns-cni\x2d7cd96c76\x2dc009\x2d0cfa\x2d4ad4\x2d389a2b15eab5.mount: Deactivated successfully. Jan 30 13:23:27.162042 systemd[1]: run-netns-cni\x2dc57d089b\x2d22c6\x2d88d1\x2da5e8\x2d2d8a3472e710.mount: Deactivated successfully. Jan 30 13:23:27.162090 systemd[1]: run-netns-cni\x2d305640de\x2dccdd\x2de811\x2de000\x2d080ed2027522.mount: Deactivated successfully. Jan 30 13:23:27.200653 containerd[1790]: time="2025-01-30T13:23:27.200621147Z" level=error msg="Failed to destroy network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.200860 containerd[1790]: time="2025-01-30T13:23:27.200842482Z" level=error msg="encountered an error cleaning up failed sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.200906 containerd[1790]: time="2025-01-30T13:23:27.200891195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.201063 kubelet[3262]: E0130 13:23:27.201037 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.201112 kubelet[3262]: E0130 13:23:27.201084 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:27.201112 kubelet[3262]: E0130 13:23:27.201099 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:27.201186 kubelet[3262]: E0130 13:23:27.201142 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" podUID="6f232251-0174-4a52-bf07-5a185bd64bf0" Jan 30 13:23:27.205395 containerd[1790]: time="2025-01-30T13:23:27.205361114Z" level=error msg="Failed to destroy network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205484 containerd[1790]: time="2025-01-30T13:23:27.205410522Z" level=error msg="Failed to destroy network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205605 containerd[1790]: time="2025-01-30T13:23:27.205590871Z" level=error msg="encountered an error cleaning up failed sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205628 containerd[1790]: time="2025-01-30T13:23:27.205607312Z" level=error msg="encountered an error cleaning up failed sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205660 containerd[1790]: time="2025-01-30T13:23:27.205633943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205660 containerd[1790]: time="2025-01-30T13:23:27.205642564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205780 kubelet[3262]: E0130 13:23:27.205758 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.205813 kubelet[3262]: E0130 13:23:27.205796 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:27.205813 kubelet[3262]: E0130 13:23:27.205809 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:27.205852 kubelet[3262]: E0130 13:23:27.205836 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ckrtn" podUID="e0884c07-f701-4d19-90b6-f7fc2b65a03a" Jan 30 13:23:27.205887 kubelet[3262]: E0130 13:23:27.205758 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 30 13:23:27.205915 kubelet[3262]: E0130 13:23:27.205905 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:27.205942 kubelet[3262]: E0130 13:23:27.205922 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:27.205979 kubelet[3262]: E0130 13:23:27.205962 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" podUID="8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9" Jan 30 13:23:27.206364 containerd[1790]: time="2025-01-30T13:23:27.206336895Z" level=error msg="Failed to destroy 
network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.206551 containerd[1790]: time="2025-01-30T13:23:27.206538216Z" level=error msg="encountered an error cleaning up failed sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.206577 containerd[1790]: time="2025-01-30T13:23:27.206566089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.206679 kubelet[3262]: E0130 13:23:27.206661 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.206716 kubelet[3262]: E0130 13:23:27.206691 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:27.206716 kubelet[3262]: E0130 13:23:27.206705 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:27.206769 kubelet[3262]: E0130 13:23:27.206725 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" podUID="b8bd5def-dd66-4c62-af8b-6e8cab979050" Jan 30 13:23:27.207119 containerd[1790]: time="2025-01-30T13:23:27.207103674Z" level=error msg="Failed to destroy network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 13:23:27.207254 containerd[1790]: time="2025-01-30T13:23:27.207243099Z" level=error msg="encountered an error cleaning up failed sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207278 containerd[1790]: time="2025-01-30T13:23:27.207268518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207349 kubelet[3262]: E0130 13:23:27.207335 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207389 kubelet[3262]: E0130 13:23:27.207358 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:27.207389 
kubelet[3262]: E0130 13:23:27.207377 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:27.207447 kubelet[3262]: E0130 13:23:27.207403 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bqbqv" podUID="24717288-c144-4929-abc3-3991af241c87" Jan 30 13:23:27.207489 containerd[1790]: time="2025-01-30T13:23:27.207425044Z" level=error msg="Failed to destroy network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207565 containerd[1790]: time="2025-01-30T13:23:27.207550539Z" level=error msg="encountered an error cleaning up failed sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207593 containerd[1790]: time="2025-01-30T13:23:27.207573842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207668 kubelet[3262]: E0130 13:23:27.207655 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:27.207693 kubelet[3262]: E0130 13:23:27.207675 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:27.207693 kubelet[3262]: E0130 13:23:27.207685 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:27.207759 kubelet[3262]: E0130 13:23:27.207702 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" Jan 30 13:23:28.160666 kubelet[3262]: I0130 13:23:28.160649 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47" Jan 30 13:23:28.160946 containerd[1790]: time="2025-01-30T13:23:28.160929431Z" level=info msg="StopPodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\"" Jan 30 13:23:28.161066 containerd[1790]: time="2025-01-30T13:23:28.161042204Z" level=info msg="Ensure that sandbox 2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47 in task-service has been cleanup successfully" Jan 30 13:23:28.161143 containerd[1790]: time="2025-01-30T13:23:28.161129292Z" level=info msg="TearDown network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" successfully" Jan 30 13:23:28.161143 containerd[1790]: time="2025-01-30T13:23:28.161139424Z" level=info msg="StopPodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" returns successfully" Jan 30 13:23:28.161330 containerd[1790]: time="2025-01-30T13:23:28.161315398Z" 
level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" Jan 30 13:23:28.161364 kubelet[3262]: I0130 13:23:28.161356 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d" Jan 30 13:23:28.161391 containerd[1790]: time="2025-01-30T13:23:28.161358355Z" level=info msg="TearDown network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" successfully" Jan 30 13:23:28.161391 containerd[1790]: time="2025-01-30T13:23:28.161382646Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" returns successfully" Jan 30 13:23:28.161475 containerd[1790]: time="2025-01-30T13:23:28.161466872Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:28.161516 containerd[1790]: time="2025-01-30T13:23:28.161508117Z" level=info msg="TearDown network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" successfully" Jan 30 13:23:28.161516 containerd[1790]: time="2025-01-30T13:23:28.161514713Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" returns successfully" Jan 30 13:23:28.161583 containerd[1790]: time="2025-01-30T13:23:28.161566685Z" level=info msg="StopPodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\"" Jan 30 13:23:28.161614 containerd[1790]: time="2025-01-30T13:23:28.161602626Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:28.161665 containerd[1790]: time="2025-01-30T13:23:28.161642127Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:28.161665 containerd[1790]: time="2025-01-30T13:23:28.161663061Z" level=info 
msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:28.161727 containerd[1790]: time="2025-01-30T13:23:28.161687049Z" level=info msg="Ensure that sandbox 9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d in task-service has been cleanup successfully" Jan 30 13:23:28.161789 containerd[1790]: time="2025-01-30T13:23:28.161778019Z" level=info msg="TearDown network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" successfully" Jan 30 13:23:28.161789 containerd[1790]: time="2025-01-30T13:23:28.161787193Z" level=info msg="StopPodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" returns successfully" Jan 30 13:23:28.161847 containerd[1790]: time="2025-01-30T13:23:28.161829905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:23:28.161897 containerd[1790]: time="2025-01-30T13:23:28.161887607Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" Jan 30 13:23:28.161952 containerd[1790]: time="2025-01-30T13:23:28.161926416Z" level=info msg="TearDown network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" successfully" Jan 30 13:23:28.161973 containerd[1790]: time="2025-01-30T13:23:28.161952617Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" returns successfully" Jan 30 13:23:28.162051 containerd[1790]: time="2025-01-30T13:23:28.162043299Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:28.162088 containerd[1790]: time="2025-01-30T13:23:28.162079622Z" level=info msg="TearDown network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" successfully" Jan 
30 13:23:28.162114 containerd[1790]: time="2025-01-30T13:23:28.162088653Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" returns successfully" Jan 30 13:23:28.162201 containerd[1790]: time="2025-01-30T13:23:28.162190588Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:28.162234 kubelet[3262]: I0130 13:23:28.162205 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987" Jan 30 13:23:28.162266 containerd[1790]: time="2025-01-30T13:23:28.162239294Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:28.162266 containerd[1790]: time="2025-01-30T13:23:28.162249543Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:28.162404 containerd[1790]: time="2025-01-30T13:23:28.162389460Z" level=info msg="StopPodSandbox for \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\"" Jan 30 13:23:28.162451 containerd[1790]: time="2025-01-30T13:23:28.162439330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:4,}" Jan 30 13:23:28.162530 containerd[1790]: time="2025-01-30T13:23:28.162517022Z" level=info msg="Ensure that sandbox 3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987 in task-service has been cleanup successfully" Jan 30 13:23:28.162620 containerd[1790]: time="2025-01-30T13:23:28.162610344Z" level=info msg="TearDown network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" successfully" Jan 30 13:23:28.162620 containerd[1790]: time="2025-01-30T13:23:28.162620084Z" level=info msg="StopPodSandbox for 
\"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" returns successfully" Jan 30 13:23:28.162750 containerd[1790]: time="2025-01-30T13:23:28.162741109Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" Jan 30 13:23:28.162800 containerd[1790]: time="2025-01-30T13:23:28.162777989Z" level=info msg="TearDown network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" successfully" Jan 30 13:23:28.162825 containerd[1790]: time="2025-01-30T13:23:28.162800719Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" returns successfully" Jan 30 13:23:28.162837 systemd[1]: run-netns-cni\x2db26008b1\x2df06c\x2d5f7b\x2d8f0e\x2db03546160fdb.mount: Deactivated successfully. Jan 30 13:23:28.162965 containerd[1790]: time="2025-01-30T13:23:28.162923894Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:28.162989 containerd[1790]: time="2025-01-30T13:23:28.162971708Z" level=info msg="TearDown network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" successfully" Jan 30 13:23:28.162989 containerd[1790]: time="2025-01-30T13:23:28.162978503Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" returns successfully" Jan 30 13:23:28.163082 containerd[1790]: time="2025-01-30T13:23:28.163069370Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:28.163136 containerd[1790]: time="2025-01-30T13:23:28.163125970Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:28.163164 containerd[1790]: time="2025-01-30T13:23:28.163136962Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns 
successfully" Jan 30 13:23:28.163203 kubelet[3262]: I0130 13:23:28.163194 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d" Jan 30 13:23:28.163398 containerd[1790]: time="2025-01-30T13:23:28.163384805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:4,}" Jan 30 13:23:28.163491 containerd[1790]: time="2025-01-30T13:23:28.163473320Z" level=info msg="StopPodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\"" Jan 30 13:23:28.163592 containerd[1790]: time="2025-01-30T13:23:28.163580450Z" level=info msg="Ensure that sandbox a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d in task-service has been cleanup successfully" Jan 30 13:23:28.163679 containerd[1790]: time="2025-01-30T13:23:28.163666723Z" level=info msg="TearDown network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" successfully" Jan 30 13:23:28.163713 containerd[1790]: time="2025-01-30T13:23:28.163677210Z" level=info msg="StopPodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" returns successfully" Jan 30 13:23:28.163793 containerd[1790]: time="2025-01-30T13:23:28.163781158Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" Jan 30 13:23:28.163843 containerd[1790]: time="2025-01-30T13:23:28.163832629Z" level=info msg="TearDown network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" successfully" Jan 30 13:23:28.163874 containerd[1790]: time="2025-01-30T13:23:28.163842172Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" returns successfully" Jan 30 13:23:28.163973 containerd[1790]: time="2025-01-30T13:23:28.163960068Z" level=info msg="StopPodSandbox 
for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:28.164019 containerd[1790]: time="2025-01-30T13:23:28.164009558Z" level=info msg="TearDown network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" successfully" Jan 30 13:23:28.164050 containerd[1790]: time="2025-01-30T13:23:28.164019337Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" returns successfully" Jan 30 13:23:28.164120 kubelet[3262]: I0130 13:23:28.164109 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7" Jan 30 13:23:28.164156 containerd[1790]: time="2025-01-30T13:23:28.164122553Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:28.164195 containerd[1790]: time="2025-01-30T13:23:28.164178420Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:28.164226 containerd[1790]: time="2025-01-30T13:23:28.164195386Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:28.164418 containerd[1790]: time="2025-01-30T13:23:28.164404146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:23:28.164441 containerd[1790]: time="2025-01-30T13:23:28.164422897Z" level=info msg="StopPodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\"" Jan 30 13:23:28.164536 containerd[1790]: time="2025-01-30T13:23:28.164526922Z" level=info msg="Ensure that sandbox 00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7 in task-service has been cleanup successfully" Jan 30 13:23:28.164621 
containerd[1790]: time="2025-01-30T13:23:28.164611357Z" level=info msg="TearDown network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" successfully" Jan 30 13:23:28.164639 containerd[1790]: time="2025-01-30T13:23:28.164622906Z" level=info msg="StopPodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.164754307Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.164806274Z" level=info msg="TearDown network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.164814331Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.164933862Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.164984619Z" level=info msg="TearDown network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.164994737Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165097306Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165147254Z" level=info msg="StopPodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\"" Jan 30 13:23:28.165848 containerd[1790]: 
time="2025-01-30T13:23:28.165167081Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165174383Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165248776Z" level=info msg="Ensure that sandbox 66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661 in task-service has been cleanup successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165327253Z" level=info msg="TearDown network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165334449Z" level=info msg="StopPodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165356228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:4,}" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165442633Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165481876Z" level=info msg="TearDown network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165504432Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165616963Z" level=info msg="StopPodSandbox for 
\"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165652658Z" level=info msg="TearDown network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165661254Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" returns successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165773596Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165829206Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:28.165848 containerd[1790]: time="2025-01-30T13:23:28.165838806Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:28.165029 systemd[1]: run-netns-cni\x2d682f9794\x2dbcb5\x2dedb2\x2dbbdd\x2d03ee8bd446fa.mount: Deactivated successfully. Jan 30 13:23:28.166304 kubelet[3262]: I0130 13:23:28.164931 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661" Jan 30 13:23:28.166351 containerd[1790]: time="2025-01-30T13:23:28.166014515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:4,}" Jan 30 13:23:28.165085 systemd[1]: run-netns-cni\x2d10447933\x2dce25\x2d4f22\x2d9a06\x2d3375b836bf52.mount: Deactivated successfully. Jan 30 13:23:28.167100 systemd[1]: run-netns-cni\x2d2c3bb4ec\x2d463b\x2d266a\x2dc57a\x2d0cc278ae07b9.mount: Deactivated successfully. 
Jan 30 13:23:28.167145 systemd[1]: run-netns-cni\x2dbfdc63dd\x2d3a00\x2d8203\x2de68d\x2d65434690afa6.mount: Deactivated successfully. Jan 30 13:23:28.167179 systemd[1]: run-netns-cni\x2df63e84ad\x2d60fc\x2dddd8\x2dc966\x2d667e78e8692d.mount: Deactivated successfully. Jan 30 13:23:28.207738 containerd[1790]: time="2025-01-30T13:23:28.207692626Z" level=error msg="Failed to destroy network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.207964 containerd[1790]: time="2025-01-30T13:23:28.207943726Z" level=error msg="encountered an error cleaning up failed sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.208024 containerd[1790]: time="2025-01-30T13:23:28.208008291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.208322 kubelet[3262]: E0130 13:23:28.208291 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.208375 kubelet[3262]: E0130 13:23:28.208344 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:28.208375 kubelet[3262]: E0130 13:23:28.208360 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" Jan 30 13:23:28.208425 kubelet[3262]: E0130 13:23:28.208392 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-226tt_calico-apiserver(8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" podUID="8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9" Jan 30 
13:23:28.214534 containerd[1790]: time="2025-01-30T13:23:28.214503886Z" level=error msg="Failed to destroy network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.214753 containerd[1790]: time="2025-01-30T13:23:28.214728158Z" level=error msg="Failed to destroy network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.214953 containerd[1790]: time="2025-01-30T13:23:28.214742335Z" level=error msg="encountered an error cleaning up failed sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.214953 containerd[1790]: time="2025-01-30T13:23:28.214841259Z" level=error msg="Failed to destroy network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.214953 containerd[1790]: time="2025-01-30T13:23:28.214882207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.214953 containerd[1790]: time="2025-01-30T13:23:28.214930622Z" level=error msg="encountered an error cleaning up failed sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215088 containerd[1790]: time="2025-01-30T13:23:28.214962060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215088 containerd[1790]: time="2025-01-30T13:23:28.215039966Z" level=error msg="encountered an error cleaning up failed sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215088 containerd[1790]: time="2025-01-30T13:23:28.215070430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215190 kubelet[3262]: E0130 13:23:28.215021 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215190 kubelet[3262]: E0130 13:23:28.215040 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215190 kubelet[3262]: E0130 13:23:28.215063 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:28.215190 kubelet[3262]: E0130 13:23:28.215077 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-bqbqv" Jan 30 13:23:28.215327 kubelet[3262]: E0130 13:23:28.215101 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bqbqv_kube-system(24717288-c144-4929-abc3-3991af241c87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bqbqv" podUID="24717288-c144-4929-abc3-3991af241c87" Jan 30 13:23:28.215327 kubelet[3262]: E0130 13:23:28.215063 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:28.215327 kubelet[3262]: E0130 13:23:28.215136 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" Jan 30 13:23:28.215431 containerd[1790]: time="2025-01-30T13:23:28.215044782Z" level=error msg="Failed to destroy network for sandbox 
\"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215431 containerd[1790]: time="2025-01-30T13:23:28.215373656Z" level=error msg="encountered an error cleaning up failed sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215431 containerd[1790]: time="2025-01-30T13:23:28.215417383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215529 kubelet[3262]: E0130 13:23:28.215164 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b9c4bc7cd-rnfv6_calico-system(6f232251-0174-4a52-bf07-5a185bd64bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" podUID="6f232251-0174-4a52-bf07-5a185bd64bf0" Jan 30 13:23:28.215529 kubelet[3262]: E0130 13:23:28.215296 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.215529 kubelet[3262]: E0130 13:23:28.215317 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:28.215633 kubelet[3262]: E0130 13:23:28.215332 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ckrtn" Jan 30 13:23:28.215633 kubelet[3262]: E0130 13:23:28.215361 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ckrtn_kube-system(e0884c07-f701-4d19-90b6-f7fc2b65a03a)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ckrtn" podUID="e0884c07-f701-4d19-90b6-f7fc2b65a03a" Jan 30 13:23:28.216206 kubelet[3262]: E0130 13:23:28.216142 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.216206 kubelet[3262]: E0130 13:23:28.216169 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:28.216206 kubelet[3262]: E0130 13:23:28.216182 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" Jan 30 13:23:28.216282 kubelet[3262]: E0130 13:23:28.216205 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f78c5dd9-9rtwp_calico-apiserver(b8bd5def-dd66-4c62-af8b-6e8cab979050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" podUID="b8bd5def-dd66-4c62-af8b-6e8cab979050" Jan 30 13:23:28.216673 containerd[1790]: time="2025-01-30T13:23:28.216657876Z" level=error msg="Failed to destroy network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.216802 containerd[1790]: time="2025-01-30T13:23:28.216790702Z" level=error msg="encountered an error cleaning up failed sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.216827 containerd[1790]: time="2025-01-30T13:23:28.216816601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.216895 kubelet[3262]: E0130 13:23:28.216884 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:23:28.216915 kubelet[3262]: E0130 13:23:28.216903 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:28.216936 kubelet[3262]: E0130 13:23:28.216915 3262 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdxpb" Jan 30 13:23:28.216954 kubelet[3262]: E0130 13:23:28.216934 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdxpb_calico-system(b216c652-2303-4f3f-bb63-04ccb5f59378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdxpb" podUID="b216c652-2303-4f3f-bb63-04ccb5f59378" Jan 30 13:23:28.299486 containerd[1790]: time="2025-01-30T13:23:28.299460593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:28.299698 containerd[1790]: time="2025-01-30T13:23:28.299678448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:23:28.300016 containerd[1790]: time="2025-01-30T13:23:28.300004662Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:28.300928 containerd[1790]: time="2025-01-30T13:23:28.300916890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:28.301555 containerd[1790]: time="2025-01-30T13:23:28.301545124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.154243513s" Jan 30 13:23:28.301574 containerd[1790]: time="2025-01-30T13:23:28.301559015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 
13:23:28.305009 containerd[1790]: time="2025-01-30T13:23:28.304993627Z" level=info msg="CreateContainer within sandbox \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:23:28.313598 containerd[1790]: time="2025-01-30T13:23:28.313554519Z" level=info msg="CreateContainer within sandbox \"55ceb8c79e2a32971aa2f3cf37b611a10cce727291e6579a2e84b596ebfe30a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0be4e5ce94eceaa00b7e2f0aa9781aa71b935d4c5780bbd63d6f63efd22ed56\"" Jan 30 13:23:28.313844 containerd[1790]: time="2025-01-30T13:23:28.313797138Z" level=info msg="StartContainer for \"d0be4e5ce94eceaa00b7e2f0aa9781aa71b935d4c5780bbd63d6f63efd22ed56\"" Jan 30 13:23:28.333743 systemd[1]: Started cri-containerd-d0be4e5ce94eceaa00b7e2f0aa9781aa71b935d4c5780bbd63d6f63efd22ed56.scope - libcontainer container d0be4e5ce94eceaa00b7e2f0aa9781aa71b935d4c5780bbd63d6f63efd22ed56. Jan 30 13:23:28.362549 containerd[1790]: time="2025-01-30T13:23:28.362464330Z" level=info msg="StartContainer for \"d0be4e5ce94eceaa00b7e2f0aa9781aa71b935d4c5780bbd63d6f63efd22ed56\" returns successfully" Jan 30 13:23:28.382936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d-shm.mount: Deactivated successfully. Jan 30 13:23:28.383058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172288124.mount: Deactivated successfully. Jan 30 13:23:28.435639 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:23:28.435700 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
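The repeated `plugin type="calico" failed (add): stat /var/lib/calico/nodename` errors above all stem from one condition: the Calico CNI plugin reads a nodename file that the calico/node container writes on startup, and every sandbox ADD fails until that file exists. A minimal sketch of that dependency, using a temp directory as a stand-in for `/var/lib/calico` (the directory, file contents, and `check_nodename` helper here are illustrative, not Calico's actual code):

```shell
#!/bin/sh
# Stand-in for /var/lib/calico; real path requires the host mount the log
# message refers to ("has mounted /var/lib/calico/").
CALICO_DIR="$(mktemp -d)"

# Emulates the stat the CNI plugin performs before setting up pod networking.
check_nodename() {
  if [ -f "$CALICO_DIR/nodename" ]; then
    echo "nodename present: $(cat "$CALICO_DIR/nodename")"
  else
    echo "stat $CALICO_DIR/nodename: no such file or directory"
  fi
}

check_nodename                          # before calico/node is up: the error in the log
echo "node-1" > "$CALICO_DIR/nodename"  # calico/node writes this on startup
check_nodename                          # afterwards: CNI ADD can proceed
```

This matches the log's own sequence: the errors stop recurring once the calico-node container started above (`StartContainer for "d0be4e5ce94..." returns successfully`) has had time to write the file.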
Jan 30 13:23:29.177062 kubelet[3262]: I0130 13:23:29.177008 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2" Jan 30 13:23:29.178172 containerd[1790]: time="2025-01-30T13:23:29.178110082Z" level=info msg="StopPodSandbox for \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\"" Jan 30 13:23:29.179083 containerd[1790]: time="2025-01-30T13:23:29.178654234Z" level=info msg="Ensure that sandbox 0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2 in task-service has been cleanup successfully" Jan 30 13:23:29.179212 containerd[1790]: time="2025-01-30T13:23:29.179083933Z" level=info msg="TearDown network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\" successfully" Jan 30 13:23:29.179212 containerd[1790]: time="2025-01-30T13:23:29.179139250Z" level=info msg="StopPodSandbox for \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\" returns successfully" Jan 30 13:23:29.179870 containerd[1790]: time="2025-01-30T13:23:29.179814200Z" level=info msg="StopPodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\"" Jan 30 13:23:29.180046 containerd[1790]: time="2025-01-30T13:23:29.180011705Z" level=info msg="TearDown network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" successfully" Jan 30 13:23:29.180190 containerd[1790]: time="2025-01-30T13:23:29.180046624Z" level=info msg="StopPodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" returns successfully" Jan 30 13:23:29.180836 containerd[1790]: time="2025-01-30T13:23:29.180759106Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" Jan 30 13:23:29.181146 containerd[1790]: time="2025-01-30T13:23:29.180992638Z" level=info msg="TearDown network for sandbox 
\"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" successfully" Jan 30 13:23:29.181146 containerd[1790]: time="2025-01-30T13:23:29.181136334Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" returns successfully" Jan 30 13:23:29.181703 kubelet[3262]: I0130 13:23:29.181651 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5" Jan 30 13:23:29.181885 containerd[1790]: time="2025-01-30T13:23:29.181699569Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:29.182023 containerd[1790]: time="2025-01-30T13:23:29.181964075Z" level=info msg="TearDown network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" successfully" Jan 30 13:23:29.182156 containerd[1790]: time="2025-01-30T13:23:29.182018124Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" returns successfully" Jan 30 13:23:29.182892 containerd[1790]: time="2025-01-30T13:23:29.182809900Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:29.183104 containerd[1790]: time="2025-01-30T13:23:29.182867734Z" level=info msg="StopPodSandbox for \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\"" Jan 30 13:23:29.183310 containerd[1790]: time="2025-01-30T13:23:29.183107676Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:29.183310 containerd[1790]: time="2025-01-30T13:23:29.183158407Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:29.183673 containerd[1790]: time="2025-01-30T13:23:29.183542671Z" level=info msg="Ensure that sandbox 
3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5 in task-service has been cleanup successfully" Jan 30 13:23:29.183920 containerd[1790]: time="2025-01-30T13:23:29.183907303Z" level=info msg="TearDown network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\" successfully" Jan 30 13:23:29.183969 containerd[1790]: time="2025-01-30T13:23:29.183918928Z" level=info msg="StopPodSandbox for \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\" returns successfully" Jan 30 13:23:29.183969 containerd[1790]: time="2025-01-30T13:23:29.183958054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:5,}" Jan 30 13:23:29.184066 containerd[1790]: time="2025-01-30T13:23:29.184052033Z" level=info msg="StopPodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\"" Jan 30 13:23:29.184107 containerd[1790]: time="2025-01-30T13:23:29.184099272Z" level=info msg="TearDown network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" successfully" Jan 30 13:23:29.184128 containerd[1790]: time="2025-01-30T13:23:29.184106917Z" level=info msg="StopPodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" returns successfully" Jan 30 13:23:29.184256 containerd[1790]: time="2025-01-30T13:23:29.184246011Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" Jan 30 13:23:29.184303 containerd[1790]: time="2025-01-30T13:23:29.184293316Z" level=info msg="TearDown network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" successfully" Jan 30 13:23:29.184336 containerd[1790]: time="2025-01-30T13:23:29.184301903Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" returns successfully" Jan 30 13:23:29.184424 
containerd[1790]: time="2025-01-30T13:23:29.184413850Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 13:23:29.184467 kubelet[3262]: I0130 13:23:29.184415 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520" Jan 30 13:23:29.184508 containerd[1790]: time="2025-01-30T13:23:29.184462910Z" level=info msg="TearDown network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" successfully" Jan 30 13:23:29.184508 containerd[1790]: time="2025-01-30T13:23:29.184472350Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" returns successfully" Jan 30 13:23:29.184614 containerd[1790]: time="2025-01-30T13:23:29.184603359Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:29.184656 containerd[1790]: time="2025-01-30T13:23:29.184622209Z" level=info msg="StopPodSandbox for \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\"" Jan 30 13:23:29.184656 containerd[1790]: time="2025-01-30T13:23:29.184639754Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:29.184656 containerd[1790]: time="2025-01-30T13:23:29.184647668Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:29.184748 containerd[1790]: time="2025-01-30T13:23:29.184728580Z" level=info msg="Ensure that sandbox 5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520 in task-service has been cleanup successfully" Jan 30 13:23:29.184815 containerd[1790]: time="2025-01-30T13:23:29.184802938Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:5,}" Jan 30 13:23:29.184812 systemd[1]: run-netns-cni\x2d64f15bf6\x2d1413\x2d3215\x2d267f\x2d83b2d2cc918f.mount: Deactivated successfully. Jan 30 13:23:29.185006 containerd[1790]: time="2025-01-30T13:23:29.184807546Z" level=info msg="TearDown network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\" successfully" Jan 30 13:23:29.185006 containerd[1790]: time="2025-01-30T13:23:29.184879048Z" level=info msg="StopPodSandbox for \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\" returns successfully" Jan 30 13:23:29.185006 containerd[1790]: time="2025-01-30T13:23:29.184970016Z" level=info msg="StopPodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\"" Jan 30 13:23:29.185095 containerd[1790]: time="2025-01-30T13:23:29.185014728Z" level=info msg="TearDown network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" successfully" Jan 30 13:23:29.185095 containerd[1790]: time="2025-01-30T13:23:29.185022174Z" level=info msg="StopPodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" returns successfully" Jan 30 13:23:29.185148 containerd[1790]: time="2025-01-30T13:23:29.185137467Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" Jan 30 13:23:29.185201 containerd[1790]: time="2025-01-30T13:23:29.185190469Z" level=info msg="TearDown network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" successfully" Jan 30 13:23:29.185231 containerd[1790]: time="2025-01-30T13:23:29.185200751Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" returns successfully" Jan 30 13:23:29.185317 containerd[1790]: time="2025-01-30T13:23:29.185305080Z" level=info msg="StopPodSandbox for 
\"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:29.185363 containerd[1790]: time="2025-01-30T13:23:29.185353405Z" level=info msg="TearDown network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" successfully" Jan 30 13:23:29.185393 containerd[1790]: time="2025-01-30T13:23:29.185363107Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" returns successfully" Jan 30 13:23:29.185450 kubelet[3262]: I0130 13:23:29.185441 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d" Jan 30 13:23:29.185507 containerd[1790]: time="2025-01-30T13:23:29.185497895Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:29.185547 containerd[1790]: time="2025-01-30T13:23:29.185537963Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:29.185583 containerd[1790]: time="2025-01-30T13:23:29.185547044Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:29.185677 containerd[1790]: time="2025-01-30T13:23:29.185665776Z" level=info msg="StopPodSandbox for \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\"" Jan 30 13:23:29.185806 containerd[1790]: time="2025-01-30T13:23:29.185794808Z" level=info msg="Ensure that sandbox 51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d in task-service has been cleanup successfully" Jan 30 13:23:29.185880 containerd[1790]: time="2025-01-30T13:23:29.185798091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:5,}" Jan 30 13:23:29.185911 containerd[1790]: 
time="2025-01-30T13:23:29.185878784Z" level=info msg="TearDown network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\" successfully" Jan 30 13:23:29.185911 containerd[1790]: time="2025-01-30T13:23:29.185886692Z" level=info msg="StopPodSandbox for \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\" returns successfully" Jan 30 13:23:29.186032 containerd[1790]: time="2025-01-30T13:23:29.186022529Z" level=info msg="StopPodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\"" Jan 30 13:23:29.186072 containerd[1790]: time="2025-01-30T13:23:29.186064384Z" level=info msg="TearDown network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" successfully" Jan 30 13:23:29.186101 containerd[1790]: time="2025-01-30T13:23:29.186073309Z" level=info msg="StopPodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" returns successfully" Jan 30 13:23:29.186184 containerd[1790]: time="2025-01-30T13:23:29.186170216Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" Jan 30 13:23:29.186225 containerd[1790]: time="2025-01-30T13:23:29.186218875Z" level=info msg="TearDown network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" successfully" Jan 30 13:23:29.186254 containerd[1790]: time="2025-01-30T13:23:29.186225887Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" returns successfully" Jan 30 13:23:29.186330 containerd[1790]: time="2025-01-30T13:23:29.186320798Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:29.186364 kubelet[3262]: I0130 13:23:29.186340 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002" Jan 30 13:23:29.186398 
containerd[1790]: time="2025-01-30T13:23:29.186362575Z" level=info msg="TearDown network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" successfully" Jan 30 13:23:29.186398 containerd[1790]: time="2025-01-30T13:23:29.186368850Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" returns successfully" Jan 30 13:23:29.186507 containerd[1790]: time="2025-01-30T13:23:29.186494830Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:29.186560 containerd[1790]: time="2025-01-30T13:23:29.186548783Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:29.186594 containerd[1790]: time="2025-01-30T13:23:29.186559345Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:29.186657 containerd[1790]: time="2025-01-30T13:23:29.186638668Z" level=info msg="StopPodSandbox for \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\"" Jan 30 13:23:29.186753 containerd[1790]: time="2025-01-30T13:23:29.186738319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:5,}" Jan 30 13:23:29.186822 containerd[1790]: time="2025-01-30T13:23:29.186739734Z" level=info msg="Ensure that sandbox 205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002 in task-service has been cleanup successfully" Jan 30 13:23:29.186907 containerd[1790]: time="2025-01-30T13:23:29.186895091Z" level=info msg="TearDown network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\" successfully" Jan 30 13:23:29.186907 containerd[1790]: time="2025-01-30T13:23:29.186905684Z" level=info msg="StopPodSandbox for 
\"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\" returns successfully" Jan 30 13:23:29.187029 containerd[1790]: time="2025-01-30T13:23:29.187016995Z" level=info msg="StopPodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\"" Jan 30 13:23:29.187079 containerd[1790]: time="2025-01-30T13:23:29.187068294Z" level=info msg="TearDown network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" successfully" Jan 30 13:23:29.187110 containerd[1790]: time="2025-01-30T13:23:29.187078769Z" level=info msg="StopPodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" returns successfully" Jan 30 13:23:29.187094 systemd[1]: run-netns-cni\x2d98d7e134\x2d9fd8\x2d0d2c\x2dac4d\x2dda89893a347b.mount: Deactivated successfully. Jan 30 13:23:29.187162 systemd[1]: run-netns-cni\x2d6d9794c2\x2d3f9e\x2d0583\x2d6c71\x2d8bc4b7c89e74.mount: Deactivated successfully. Jan 30 13:23:29.187221 containerd[1790]: time="2025-01-30T13:23:29.187201513Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" Jan 30 13:23:29.187263 containerd[1790]: time="2025-01-30T13:23:29.187253822Z" level=info msg="TearDown network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" successfully" Jan 30 13:23:29.187263 containerd[1790]: time="2025-01-30T13:23:29.187261833Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" returns successfully" Jan 30 13:23:29.187321 kubelet[3262]: I0130 13:23:29.187268 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db" Jan 30 13:23:29.187423 containerd[1790]: time="2025-01-30T13:23:29.187411653Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:29.187476 containerd[1790]: 
time="2025-01-30T13:23:29.187467254Z" level=info msg="StopPodSandbox for \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\"" Jan 30 13:23:29.187504 containerd[1790]: time="2025-01-30T13:23:29.187476080Z" level=info msg="TearDown network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" successfully" Jan 30 13:23:29.187504 containerd[1790]: time="2025-01-30T13:23:29.187491314Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" returns successfully" Jan 30 13:23:29.187583 containerd[1790]: time="2025-01-30T13:23:29.187573605Z" level=info msg="Ensure that sandbox c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db in task-service has been cleanup successfully" Jan 30 13:23:29.187647 containerd[1790]: time="2025-01-30T13:23:29.187637472Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:29.187669 containerd[1790]: time="2025-01-30T13:23:29.187656493Z" level=info msg="TearDown network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\" successfully" Jan 30 13:23:29.187669 containerd[1790]: time="2025-01-30T13:23:29.187665940Z" level=info msg="StopPodSandbox for \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\" returns successfully" Jan 30 13:23:29.187716 containerd[1790]: time="2025-01-30T13:23:29.187676353Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:29.187716 containerd[1790]: time="2025-01-30T13:23:29.187682726Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:29.187792 containerd[1790]: time="2025-01-30T13:23:29.187781233Z" level=info msg="StopPodSandbox for \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\"" Jan 30 13:23:29.187830 
containerd[1790]: time="2025-01-30T13:23:29.187822495Z" level=info msg="TearDown network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" successfully" Jan 30 13:23:29.187849 containerd[1790]: time="2025-01-30T13:23:29.187829719Z" level=info msg="StopPodSandbox for \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" returns successfully" Jan 30 13:23:29.187849 containerd[1790]: time="2025-01-30T13:23:29.187829880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:5,}" Jan 30 13:23:29.187956 containerd[1790]: time="2025-01-30T13:23:29.187944229Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" Jan 30 13:23:29.187989 containerd[1790]: time="2025-01-30T13:23:29.187983014Z" level=info msg="TearDown network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" successfully" Jan 30 13:23:29.188014 containerd[1790]: time="2025-01-30T13:23:29.187989709Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" returns successfully" Jan 30 13:23:29.188104 containerd[1790]: time="2025-01-30T13:23:29.188094253Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:29.188138 containerd[1790]: time="2025-01-30T13:23:29.188132139Z" level=info msg="TearDown network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" successfully" Jan 30 13:23:29.188158 containerd[1790]: time="2025-01-30T13:23:29.188138554Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" returns successfully" Jan 30 13:23:29.188255 containerd[1790]: time="2025-01-30T13:23:29.188243010Z" level=info msg="StopPodSandbox for 
\"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:29.188293 containerd[1790]: time="2025-01-30T13:23:29.188285447Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:29.188293 containerd[1790]: time="2025-01-30T13:23:29.188292645Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns successfully" Jan 30 13:23:29.188498 containerd[1790]: time="2025-01-30T13:23:29.188476069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:5,}" Jan 30 13:23:29.188901 kubelet[3262]: I0130 13:23:29.188877 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j2qfm" podStartSLOduration=2.050109188 podStartE2EDuration="12.188867322s" podCreationTimestamp="2025-01-30 13:23:17 +0000 UTC" firstStartedPulling="2025-01-30 13:23:18.163124784 +0000 UTC m=+20.128545320" lastFinishedPulling="2025-01-30 13:23:28.301882924 +0000 UTC m=+30.267303454" observedRunningTime="2025-01-30 13:23:29.188716753 +0000 UTC m=+31.154137286" watchObservedRunningTime="2025-01-30 13:23:29.188867322 +0000 UTC m=+31.154287850" Jan 30 13:23:29.189465 systemd[1]: run-netns-cni\x2dd5c69590\x2dca96\x2d39ad\x2d49b3\x2dd95ca5081ebf.mount: Deactivated successfully. Jan 30 13:23:29.189534 systemd[1]: run-netns-cni\x2db00ed295\x2d66ee\x2dac02\x2d3342\x2d0c5d62696d26.mount: Deactivated successfully. Jan 30 13:23:29.189595 systemd[1]: run-netns-cni\x2d365ebf5a\x2df001\x2d2a88\x2d753c\x2d9240e5003465.mount: Deactivated successfully. 
Jan 30 13:23:29.284031 systemd-networkd[1702]: cali0dc998ba2d7: Link UP Jan 30 13:23:29.284135 systemd-networkd[1702]: cali0dc998ba2d7: Gained carrier Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.225 [INFO][5630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.232 [INFO][5630] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0 calico-apiserver-57f78c5dd9- calico-apiserver b8bd5def-dd66-4c62-af8b-6e8cab979050 657 0 2025-01-30 13:23:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f78c5dd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-9d6a1ac7ae calico-apiserver-57f78c5dd9-9rtwp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0dc998ba2d7 [] []}} ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.232 [INFO][5630] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.248 [INFO][5748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" 
HandleID="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.254 [INFO][5748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" HandleID="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000435b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-9d6a1ac7ae", "pod":"calico-apiserver-57f78c5dd9-9rtwp", "timestamp":"2025-01-30 13:23:29.248962477 +0000 UTC"}, Hostname:"ci-4186.1.0-a-9d6a1ac7ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.254 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.254 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.254 [INFO][5748] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-9d6a1ac7ae' Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.256 [INFO][5748] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.258 [INFO][5748] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.260 [INFO][5748] ipam/ipam.go 489: Trying affinity for 192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.261 [INFO][5748] ipam/ipam.go 155: Attempting to load block cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.262 [INFO][5748] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.262 [INFO][5748] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.263 [INFO][5748] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9 Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.271 [INFO][5748] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.274 [INFO][5748] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.38.65/26] block=192.168.38.64/26 handle="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.274 [INFO][5748] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.65/26] handle="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.274 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:23:29.288880 containerd[1790]: 2025-01-30 13:23:29.274 [INFO][5748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.65/26] IPv6=[] ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" HandleID="k8s-pod-network.19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.289397 containerd[1790]: 2025-01-30 13:23:29.276 [INFO][5630] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0", GenerateName:"calico-apiserver-57f78c5dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8bd5def-dd66-4c62-af8b-6e8cab979050", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57f78c5dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"", Pod:"calico-apiserver-57f78c5dd9-9rtwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dc998ba2d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.289397 containerd[1790]: 2025-01-30 13:23:29.276 [INFO][5630] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.38.65/32] ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.289397 containerd[1790]: 2025-01-30 13:23:29.276 [INFO][5630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dc998ba2d7 ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.289397 containerd[1790]: 2025-01-30 13:23:29.284 [INFO][5630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" 
WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.289397 containerd[1790]: 2025-01-30 13:23:29.284 [INFO][5630] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0", GenerateName:"calico-apiserver-57f78c5dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8bd5def-dd66-4c62-af8b-6e8cab979050", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f78c5dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9", Pod:"calico-apiserver-57f78c5dd9-9rtwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dc998ba2d7", MAC:"ea:5a:2f:9e:8d:c0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.289397 containerd[1790]: 2025-01-30 13:23:29.288 [INFO][5630] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-9rtwp" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--9rtwp-eth0" Jan 30 13:23:29.289617 systemd-networkd[1702]: cali520923d0d18: Link UP Jan 30 13:23:29.289761 systemd-networkd[1702]: cali520923d0d18: Gained carrier Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.231 [INFO][5665] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.236 [INFO][5665] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0 calico-apiserver-57f78c5dd9- calico-apiserver 8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9 659 0 2025-01-30 13:23:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f78c5dd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-9d6a1ac7ae calico-apiserver-57f78c5dd9-226tt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali520923d0d18 [] []}} ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.236 [INFO][5665] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.251 [INFO][5758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" HandleID="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.257 [INFO][5758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" HandleID="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-9d6a1ac7ae", "pod":"calico-apiserver-57f78c5dd9-226tt", "timestamp":"2025-01-30 13:23:29.251031311 +0000 UTC"}, Hostname:"ci-4186.1.0-a-9d6a1ac7ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.257 [INFO][5758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.274 [INFO][5758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.274 [INFO][5758] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-9d6a1ac7ae' Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.275 [INFO][5758] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.278 [INFO][5758] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.280 [INFO][5758] ipam/ipam.go 489: Trying affinity for 192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.281 [INFO][5758] ipam/ipam.go 155: Attempting to load block cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.282 [INFO][5758] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.282 [INFO][5758] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.283 [INFO][5758] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4 Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.285 [INFO][5758] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.287 [INFO][5758] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.38.66/26] block=192.168.38.64/26 handle="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.287 [INFO][5758] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.66/26] handle="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.287 [INFO][5758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:23:29.294236 containerd[1790]: 2025-01-30 13:23:29.287 [INFO][5758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.66/26] IPv6=[] ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" HandleID="k8s-pod-network.65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.294668 containerd[1790]: 2025-01-30 13:23:29.288 [INFO][5665] cni-plugin/k8s.go 386: Populated endpoint ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0", GenerateName:"calico-apiserver-57f78c5dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57f78c5dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"", Pod:"calico-apiserver-57f78c5dd9-226tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali520923d0d18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.294668 containerd[1790]: 2025-01-30 13:23:29.288 [INFO][5665] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.38.66/32] ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.294668 containerd[1790]: 2025-01-30 13:23:29.288 [INFO][5665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali520923d0d18 ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.294668 containerd[1790]: 2025-01-30 13:23:29.289 [INFO][5665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" 
WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.294668 containerd[1790]: 2025-01-30 13:23:29.289 [INFO][5665] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0", GenerateName:"calico-apiserver-57f78c5dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f78c5dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4", Pod:"calico-apiserver-57f78c5dd9-226tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali520923d0d18", MAC:"56:9e:12:5c:a0:ca", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.294668 containerd[1790]: 2025-01-30 13:23:29.293 [INFO][5665] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4" Namespace="calico-apiserver" Pod="calico-apiserver-57f78c5dd9-226tt" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--apiserver--57f78c5dd9--226tt-eth0" Jan 30 13:23:29.299167 containerd[1790]: time="2025-01-30T13:23:29.299126318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:29.299167 containerd[1790]: time="2025-01-30T13:23:29.299157103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:29.299167 containerd[1790]: time="2025-01-30T13:23:29.299164002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.299303 containerd[1790]: time="2025-01-30T13:23:29.299209229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.303911 containerd[1790]: time="2025-01-30T13:23:29.303846678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:29.304141 containerd[1790]: time="2025-01-30T13:23:29.304116391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:29.304141 containerd[1790]: time="2025-01-30T13:23:29.304132214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.304215 containerd[1790]: time="2025-01-30T13:23:29.304185437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.304299 systemd-networkd[1702]: califbe26cff229: Link UP Jan 30 13:23:29.304419 systemd-networkd[1702]: califbe26cff229: Gained carrier Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.229 [INFO][5640] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.235 [INFO][5640] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0 calico-kube-controllers-6b9c4bc7cd- calico-system 6f232251-0174-4a52-bf07-5a185bd64bf0 656 0 2025-01-30 13:23:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b9c4bc7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.0-a-9d6a1ac7ae calico-kube-controllers-6b9c4bc7cd-rnfv6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califbe26cff229 [] []}} ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.235 [INFO][5640] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" 
WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.251 [INFO][5753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" HandleID="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.258 [INFO][5753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" HandleID="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033aab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-9d6a1ac7ae", "pod":"calico-kube-controllers-6b9c4bc7cd-rnfv6", "timestamp":"2025-01-30 13:23:29.251945959 +0000 UTC"}, Hostname:"ci-4186.1.0-a-9d6a1ac7ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.258 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.287 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.288 [INFO][5753] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-9d6a1ac7ae' Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.289 [INFO][5753] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.291 [INFO][5753] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.294 [INFO][5753] ipam/ipam.go 489: Trying affinity for 192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.295 [INFO][5753] ipam/ipam.go 155: Attempting to load block cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.296 [INFO][5753] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.296 [INFO][5753] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.297 [INFO][5753] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.299 [INFO][5753] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.302 [INFO][5753] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.38.67/26] block=192.168.38.64/26 handle="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.302 [INFO][5753] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.67/26] handle="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.302 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:23:29.310886 containerd[1790]: 2025-01-30 13:23:29.302 [INFO][5753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.67/26] IPv6=[] ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" HandleID="k8s-pod-network.64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.311294 containerd[1790]: 2025-01-30 13:23:29.303 [INFO][5640] cni-plugin/k8s.go 386: Populated endpoint ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0", GenerateName:"calico-kube-controllers-6b9c4bc7cd-", Namespace:"calico-system", SelfLink:"", UID:"6f232251-0174-4a52-bf07-5a185bd64bf0", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9c4bc7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"", Pod:"calico-kube-controllers-6b9c4bc7cd-rnfv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califbe26cff229", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.311294 containerd[1790]: 2025-01-30 13:23:29.303 [INFO][5640] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.38.67/32] ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.311294 containerd[1790]: 2025-01-30 13:23:29.303 [INFO][5640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbe26cff229 ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.311294 containerd[1790]: 2025-01-30 13:23:29.304 [INFO][5640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.311294 containerd[1790]: 2025-01-30 13:23:29.304 [INFO][5640] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0", GenerateName:"calico-kube-controllers-6b9c4bc7cd-", Namespace:"calico-system", SelfLink:"", UID:"6f232251-0174-4a52-bf07-5a185bd64bf0", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9c4bc7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b", Pod:"calico-kube-controllers-6b9c4bc7cd-rnfv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califbe26cff229", MAC:"06:38:02:f5:a1:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.311294 containerd[1790]: 2025-01-30 13:23:29.310 [INFO][5640] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b" Namespace="calico-system" Pod="calico-kube-controllers-6b9c4bc7cd-rnfv6" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-calico--kube--controllers--6b9c4bc7cd--rnfv6-eth0" Jan 30 13:23:29.317627 systemd[1]: Started cri-containerd-19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9.scope - libcontainer container 19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9. Jan 30 13:23:29.319927 systemd-networkd[1702]: caliaa019ab07c8: Link UP Jan 30 13:23:29.320033 systemd-networkd[1702]: caliaa019ab07c8: Gained carrier Jan 30 13:23:29.320164 systemd[1]: Started cri-containerd-65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4.scope - libcontainer container 65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4. Jan 30 13:23:29.320804 containerd[1790]: time="2025-01-30T13:23:29.320541072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:29.320867 containerd[1790]: time="2025-01-30T13:23:29.320828996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:29.320867 containerd[1790]: time="2025-01-30T13:23:29.320843073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.320910 containerd[1790]: time="2025-01-30T13:23:29.320896548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.231 [INFO][5651] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.237 [INFO][5651] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0 csi-node-driver- calico-system b216c652-2303-4f3f-bb63-04ccb5f59378 591 0 2025-01-30 13:23:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.0-a-9d6a1ac7ae csi-node-driver-fdxpb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa019ab07c8 [] []}} ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.237 [INFO][5651] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.254 [INFO][5759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" 
HandleID="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.258 [INFO][5759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" HandleID="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005022a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-9d6a1ac7ae", "pod":"csi-node-driver-fdxpb", "timestamp":"2025-01-30 13:23:29.254331154 +0000 UTC"}, Hostname:"ci-4186.1.0-a-9d6a1ac7ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.258 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.302 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.302 [INFO][5759] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-9d6a1ac7ae' Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.303 [INFO][5759] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.307 [INFO][5759] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.310 [INFO][5759] ipam/ipam.go 489: Trying affinity for 192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.311 [INFO][5759] ipam/ipam.go 155: Attempting to load block cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.312 [INFO][5759] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.312 [INFO][5759] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.313 [INFO][5759] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.315 [INFO][5759] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.318 [INFO][5759] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.38.68/26] block=192.168.38.64/26 handle="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.318 [INFO][5759] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.68/26] handle="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.318 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:23:29.325778 containerd[1790]: 2025-01-30 13:23:29.318 [INFO][5759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.68/26] IPv6=[] ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" HandleID="k8s-pod-network.345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.326443 containerd[1790]: 2025-01-30 13:23:29.319 [INFO][5651] cni-plugin/k8s.go 386: Populated endpoint ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b216c652-2303-4f3f-bb63-04ccb5f59378", ResourceVersion:"591", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"", Pod:"csi-node-driver-fdxpb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa019ab07c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.326443 containerd[1790]: 2025-01-30 13:23:29.319 [INFO][5651] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.38.68/32] ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.326443 containerd[1790]: 2025-01-30 13:23:29.319 [INFO][5651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa019ab07c8 ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.326443 containerd[1790]: 2025-01-30 13:23:29.320 [INFO][5651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.326443 containerd[1790]: 2025-01-30 13:23:29.320 
[INFO][5651] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b216c652-2303-4f3f-bb63-04ccb5f59378", ResourceVersion:"591", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f", Pod:"csi-node-driver-fdxpb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa019ab07c8", MAC:"8a:e9:18:04:a5:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.326443 containerd[1790]: 2025-01-30 13:23:29.324 [INFO][5651] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f" Namespace="calico-system" Pod="csi-node-driver-fdxpb" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-csi--node--driver--fdxpb-eth0" Jan 30 13:23:29.327981 systemd[1]: Started cri-containerd-64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b.scope - libcontainer container 64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b. Jan 30 13:23:29.336596 containerd[1790]: time="2025-01-30T13:23:29.336453018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:29.336596 containerd[1790]: time="2025-01-30T13:23:29.336507030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:29.336596 containerd[1790]: time="2025-01-30T13:23:29.336523511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.336798 containerd[1790]: time="2025-01-30T13:23:29.336595574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.339047 systemd-networkd[1702]: calide76ed92556: Link UP Jan 30 13:23:29.339199 systemd-networkd[1702]: calide76ed92556: Gained carrier Jan 30 13:23:29.343967 containerd[1790]: time="2025-01-30T13:23:29.343939996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-9rtwp,Uid:b8bd5def-dd66-4c62-af8b-6e8cab979050,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9\"" Jan 30 13:23:29.345116 containerd[1790]: time="2025-01-30T13:23:29.345045931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.233 [INFO][5677] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.238 [INFO][5677] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0 coredns-7db6d8ff4d- kube-system e0884c07-f701-4d19-90b6-f7fc2b65a03a 658 0 2025-01-30 13:23:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-9d6a1ac7ae coredns-7db6d8ff4d-ckrtn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide76ed92556 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.238 [INFO][5677] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.257 [INFO][5778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" HandleID="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.260 [INFO][5778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" HandleID="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367e60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-9d6a1ac7ae", "pod":"coredns-7db6d8ff4d-ckrtn", "timestamp":"2025-01-30 13:23:29.257187776 +0000 UTC"}, Hostname:"ci-4186.1.0-a-9d6a1ac7ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.261 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.318 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.318 [INFO][5778] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-9d6a1ac7ae' Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.319 [INFO][5778] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.322 [INFO][5778] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.326 [INFO][5778] ipam/ipam.go 489: Trying affinity for 192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.327 [INFO][5778] ipam/ipam.go 155: Attempting to load block cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.329 [INFO][5778] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.329 [INFO][5778] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.330 [INFO][5778] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39 Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.332 [INFO][5778] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.336 [INFO][5778] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.38.69/26] block=192.168.38.64/26 handle="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.336 [INFO][5778] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.69/26] handle="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.336 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:23:29.345645 containerd[1790]: 2025-01-30 13:23:29.336 [INFO][5778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.69/26] IPv6=[] ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" HandleID="k8s-pod-network.71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.346154 containerd[1790]: 2025-01-30 13:23:29.338 [INFO][5677] cni-plugin/k8s.go 386: Populated endpoint ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e0884c07-f701-4d19-90b6-f7fc2b65a03a", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"", Pod:"coredns-7db6d8ff4d-ckrtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide76ed92556", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.346154 containerd[1790]: 2025-01-30 13:23:29.338 [INFO][5677] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.38.69/32] ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.346154 containerd[1790]: 2025-01-30 13:23:29.338 [INFO][5677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide76ed92556 ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.346154 containerd[1790]: 2025-01-30 13:23:29.339 [INFO][5677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.346154 containerd[1790]: 2025-01-30 13:23:29.339 [INFO][5677] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e0884c07-f701-4d19-90b6-f7fc2b65a03a", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39", Pod:"coredns-7db6d8ff4d-ckrtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide76ed92556", MAC:"ea:32:3d:7c:ef:fd", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.346154 containerd[1790]: 2025-01-30 13:23:29.344 [INFO][5677] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ckrtn" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--ckrtn-eth0" Jan 30 13:23:29.355412 containerd[1790]: time="2025-01-30T13:23:29.355365819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:29.355412 containerd[1790]: time="2025-01-30T13:23:29.355400626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:29.355412 containerd[1790]: time="2025-01-30T13:23:29.355408669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.355562 containerd[1790]: time="2025-01-30T13:23:29.355482297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.355765 systemd-networkd[1702]: calif14d13f3e14: Link UP Jan 30 13:23:29.355884 systemd-networkd[1702]: calif14d13f3e14: Gained carrier Jan 30 13:23:29.356660 systemd[1]: Started cri-containerd-345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f.scope - libcontainer container 345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f. Jan 30 13:23:29.357379 containerd[1790]: time="2025-01-30T13:23:29.357351443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f78c5dd9-226tt,Uid:8c3dd57d-2dde-4663-8d6a-5bcd3da5e6a9,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4\"" Jan 30 13:23:29.365389 containerd[1790]: time="2025-01-30T13:23:29.365354630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9c4bc7cd-rnfv6,Uid:6f232251-0174-4a52-bf07-5a185bd64bf0,Namespace:calico-system,Attempt:5,} returns sandbox id \"64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b\"" Jan 30 13:23:29.368384 systemd[1]: Started cri-containerd-71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39.scope - libcontainer container 71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39. 
Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.234 [INFO][5689] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.240 [INFO][5689] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0 coredns-7db6d8ff4d- kube-system 24717288-c144-4929-abc3-3991af241c87 654 0 2025-01-30 13:23:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-9d6a1ac7ae coredns-7db6d8ff4d-bqbqv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif14d13f3e14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.240 [INFO][5689] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.257 [INFO][5787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" HandleID="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.261 [INFO][5787] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" HandleID="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003654f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-9d6a1ac7ae", "pod":"coredns-7db6d8ff4d-bqbqv", "timestamp":"2025-01-30 13:23:29.257285228 +0000 UTC"}, Hostname:"ci-4186.1.0-a-9d6a1ac7ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.261 [INFO][5787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.336 [INFO][5787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.336 [INFO][5787] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-9d6a1ac7ae' Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.337 [INFO][5787] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.341 [INFO][5787] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.345 [INFO][5787] ipam/ipam.go 489: Trying affinity for 192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.346 [INFO][5787] ipam/ipam.go 155: Attempting to load block cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.347 [INFO][5787] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.347 [INFO][5787] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.348 [INFO][5787] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.350 [INFO][5787] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.353 [INFO][5787] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.38.70/26] block=192.168.38.64/26 handle="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.353 [INFO][5787] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.70/26] handle="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" host="ci-4186.1.0-a-9d6a1ac7ae" Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.353 [INFO][5787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:23:29.370033 containerd[1790]: 2025-01-30 13:23:29.353 [INFO][5787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.70/26] IPv6=[] ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" HandleID="k8s-pod-network.361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Workload="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.370701 containerd[1790]: 2025-01-30 13:23:29.354 [INFO][5689] cni-plugin/k8s.go 386: Populated endpoint ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"24717288-c144-4929-abc3-3991af241c87", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"", Pod:"coredns-7db6d8ff4d-bqbqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif14d13f3e14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.370701 containerd[1790]: 2025-01-30 13:23:29.354 [INFO][5689] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.38.70/32] ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.370701 containerd[1790]: 2025-01-30 13:23:29.354 [INFO][5689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif14d13f3e14 ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.370701 containerd[1790]: 2025-01-30 13:23:29.355 [INFO][5689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.370701 containerd[1790]: 2025-01-30 13:23:29.356 [INFO][5689] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"24717288-c144-4929-abc3-3991af241c87", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-9d6a1ac7ae", ContainerID:"361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa", Pod:"coredns-7db6d8ff4d-bqbqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif14d13f3e14", MAC:"fe:13:f1:9a:f0:09", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:23:29.370701 containerd[1790]: 2025-01-30 13:23:29.364 [INFO][5689] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bqbqv" WorkloadEndpoint="ci--4186.1.0--a--9d6a1ac7ae-k8s-coredns--7db6d8ff4d--bqbqv-eth0" Jan 30 13:23:29.376803 containerd[1790]: time="2025-01-30T13:23:29.376784083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdxpb,Uid:b216c652-2303-4f3f-bb63-04ccb5f59378,Namespace:calico-system,Attempt:5,} returns sandbox id \"345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f\"" Jan 30 13:23:29.379353 containerd[1790]: time="2025-01-30T13:23:29.379319450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:23:29.379353 containerd[1790]: time="2025-01-30T13:23:29.379348943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:23:29.379452 containerd[1790]: time="2025-01-30T13:23:29.379357028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.379452 containerd[1790]: time="2025-01-30T13:23:29.379403199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:23:29.398640 systemd[1]: Started cri-containerd-361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa.scope - libcontainer container 361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa. Jan 30 13:23:29.403232 containerd[1790]: time="2025-01-30T13:23:29.403210752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckrtn,Uid:e0884c07-f701-4d19-90b6-f7fc2b65a03a,Namespace:kube-system,Attempt:5,} returns sandbox id \"71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39\"" Jan 30 13:23:29.404406 containerd[1790]: time="2025-01-30T13:23:29.404392948Z" level=info msg="CreateContainer within sandbox \"71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:23:29.409709 containerd[1790]: time="2025-01-30T13:23:29.409688170Z" level=info msg="CreateContainer within sandbox \"71f0f9373262ad24d7a51c5e09fcce852f2663cb0b6475ccc87129370f597d39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e97b9409face119f9cc699ae6db9379043f7210b35f7a886a37b57de2cd184dd\"" Jan 30 13:23:29.409943 containerd[1790]: time="2025-01-30T13:23:29.409900094Z" level=info msg="StartContainer for \"e97b9409face119f9cc699ae6db9379043f7210b35f7a886a37b57de2cd184dd\"" Jan 30 13:23:29.420205 systemd[1]: Started cri-containerd-e97b9409face119f9cc699ae6db9379043f7210b35f7a886a37b57de2cd184dd.scope - libcontainer container e97b9409face119f9cc699ae6db9379043f7210b35f7a886a37b57de2cd184dd. 
Jan 30 13:23:29.421653 containerd[1790]: time="2025-01-30T13:23:29.421632487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqbqv,Uid:24717288-c144-4929-abc3-3991af241c87,Namespace:kube-system,Attempt:5,} returns sandbox id \"361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa\"" Jan 30 13:23:29.422763 containerd[1790]: time="2025-01-30T13:23:29.422751647Z" level=info msg="CreateContainer within sandbox \"361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:23:29.427782 containerd[1790]: time="2025-01-30T13:23:29.427703667Z" level=info msg="CreateContainer within sandbox \"361f41534202aa01665f53faa11c2d021d28bc0b078b7a2cdf718649035850fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a447af5a6f739c2412c44d1dc638bce76d8236b34301520e2cb31dcc69bda6e1\"" Jan 30 13:23:29.427979 containerd[1790]: time="2025-01-30T13:23:29.427965682Z" level=info msg="StartContainer for \"a447af5a6f739c2412c44d1dc638bce76d8236b34301520e2cb31dcc69bda6e1\"" Jan 30 13:23:29.432532 containerd[1790]: time="2025-01-30T13:23:29.432504873Z" level=info msg="StartContainer for \"e97b9409face119f9cc699ae6db9379043f7210b35f7a886a37b57de2cd184dd\" returns successfully" Jan 30 13:23:29.454880 systemd[1]: Started cri-containerd-a447af5a6f739c2412c44d1dc638bce76d8236b34301520e2cb31dcc69bda6e1.scope - libcontainer container a447af5a6f739c2412c44d1dc638bce76d8236b34301520e2cb31dcc69bda6e1. 
Jan 30 13:23:29.466598 containerd[1790]: time="2025-01-30T13:23:29.466575191Z" level=info msg="StartContainer for \"a447af5a6f739c2412c44d1dc638bce76d8236b34301520e2cb31dcc69bda6e1\" returns successfully" Jan 30 13:23:30.196616 kubelet[3262]: I0130 13:23:30.196565 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:23:30.197680 kubelet[3262]: I0130 13:23:30.197649 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ckrtn" podStartSLOduration=19.197638652 podStartE2EDuration="19.197638652s" podCreationTimestamp="2025-01-30 13:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:23:30.197408239 +0000 UTC m=+32.162828774" watchObservedRunningTime="2025-01-30 13:23:30.197638652 +0000 UTC m=+32.163059184" Jan 30 13:23:30.204428 kubelet[3262]: I0130 13:23:30.204390 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bqbqv" podStartSLOduration=19.204372329 podStartE2EDuration="19.204372329s" podCreationTimestamp="2025-01-30 13:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:23:30.204236972 +0000 UTC m=+32.169657528" watchObservedRunningTime="2025-01-30 13:23:30.204372329 +0000 UTC m=+32.169792863" Jan 30 13:23:30.490645 systemd-networkd[1702]: califbe26cff229: Gained IPv6LL Jan 30 13:23:30.618610 systemd-networkd[1702]: cali0dc998ba2d7: Gained IPv6LL Jan 30 13:23:30.810590 systemd-networkd[1702]: caliaa019ab07c8: Gained IPv6LL Jan 30 13:23:30.810897 systemd-networkd[1702]: calif14d13f3e14: Gained IPv6LL Jan 30 13:23:30.938551 systemd-networkd[1702]: calide76ed92556: Gained IPv6LL Jan 30 13:23:31.002566 systemd-networkd[1702]: cali520923d0d18: Gained IPv6LL Jan 30 13:23:31.042589 containerd[1790]: 
time="2025-01-30T13:23:31.042537298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:31.042784 containerd[1790]: time="2025-01-30T13:23:31.042746295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:23:31.043129 containerd[1790]: time="2025-01-30T13:23:31.043086027Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:31.044116 containerd[1790]: time="2025-01-30T13:23:31.044075959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:31.044537 containerd[1790]: time="2025-01-30T13:23:31.044485295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.699411921s" Jan 30 13:23:31.044537 containerd[1790]: time="2025-01-30T13:23:31.044502113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:23:31.045027 containerd[1790]: time="2025-01-30T13:23:31.044988137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:23:31.045601 containerd[1790]: time="2025-01-30T13:23:31.045564506Z" level=info msg="CreateContainer within sandbox \"19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:23:31.049547 containerd[1790]: time="2025-01-30T13:23:31.049492793Z" level=info msg="CreateContainer within sandbox \"19235e0dd98b51e32ffda79f1dd76a966c3a99ef788498e14870b6ec3c0f90b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3859c8d713778ee1df2e9a442bde2d3412b1a90feeccb078844363adba9ebb41\"" Jan 30 13:23:31.049750 containerd[1790]: time="2025-01-30T13:23:31.049736546Z" level=info msg="StartContainer for \"3859c8d713778ee1df2e9a442bde2d3412b1a90feeccb078844363adba9ebb41\"" Jan 30 13:23:31.069778 systemd[1]: Started cri-containerd-3859c8d713778ee1df2e9a442bde2d3412b1a90feeccb078844363adba9ebb41.scope - libcontainer container 3859c8d713778ee1df2e9a442bde2d3412b1a90feeccb078844363adba9ebb41. Jan 30 13:23:31.093301 containerd[1790]: time="2025-01-30T13:23:31.093256816Z" level=info msg="StartContainer for \"3859c8d713778ee1df2e9a442bde2d3412b1a90feeccb078844363adba9ebb41\" returns successfully" Jan 30 13:23:31.205352 kubelet[3262]: I0130 13:23:31.205318 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57f78c5dd9-9rtwp" podStartSLOduration=12.505160002 podStartE2EDuration="14.205305093s" podCreationTimestamp="2025-01-30 13:23:17 +0000 UTC" firstStartedPulling="2025-01-30 13:23:29.344790969 +0000 UTC m=+31.310211499" lastFinishedPulling="2025-01-30 13:23:31.04493606 +0000 UTC m=+33.010356590" observedRunningTime="2025-01-30 13:23:31.205033391 +0000 UTC m=+33.170453931" watchObservedRunningTime="2025-01-30 13:23:31.205305093 +0000 UTC m=+33.170725627" Jan 30 13:23:31.407997 containerd[1790]: time="2025-01-30T13:23:31.407970961Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:31.408122 containerd[1790]: time="2025-01-30T13:23:31.408103331Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:23:31.409449 containerd[1790]: time="2025-01-30T13:23:31.409432315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 364.428851ms" Jan 30 13:23:31.409490 containerd[1790]: time="2025-01-30T13:23:31.409451656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:23:31.410008 containerd[1790]: time="2025-01-30T13:23:31.409997465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:23:31.410686 containerd[1790]: time="2025-01-30T13:23:31.410651153Z" level=info msg="CreateContainer within sandbox \"65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:23:31.415419 containerd[1790]: time="2025-01-30T13:23:31.415402107Z" level=info msg="CreateContainer within sandbox \"65dee0859977355257dbb779e091376cc3bcdd65689eff24b0214805c86321f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a10cdf179a5577c7a7298483580ec20ac3e1128b76c76fecabf741113f55f96\"" Jan 30 13:23:31.415696 containerd[1790]: time="2025-01-30T13:23:31.415684768Z" level=info msg="StartContainer for \"8a10cdf179a5577c7a7298483580ec20ac3e1128b76c76fecabf741113f55f96\"" Jan 30 13:23:31.439676 systemd[1]: Started cri-containerd-8a10cdf179a5577c7a7298483580ec20ac3e1128b76c76fecabf741113f55f96.scope - libcontainer container 8a10cdf179a5577c7a7298483580ec20ac3e1128b76c76fecabf741113f55f96. 
Jan 30 13:23:31.466584 containerd[1790]: time="2025-01-30T13:23:31.466557394Z" level=info msg="StartContainer for \"8a10cdf179a5577c7a7298483580ec20ac3e1128b76c76fecabf741113f55f96\" returns successfully" Jan 30 13:23:32.210337 kubelet[3262]: I0130 13:23:32.210283 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:23:33.030210 containerd[1790]: time="2025-01-30T13:23:33.030156624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:33.030411 containerd[1790]: time="2025-01-30T13:23:33.030392873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:23:33.030685 containerd[1790]: time="2025-01-30T13:23:33.030641309Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:33.031736 containerd[1790]: time="2025-01-30T13:23:33.031696087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:33.032467 containerd[1790]: time="2025-01-30T13:23:33.032426898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.622415568s" Jan 30 13:23:33.032467 containerd[1790]: time="2025-01-30T13:23:33.032441944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:23:33.032952 containerd[1790]: time="2025-01-30T13:23:33.032916196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:23:33.036013 containerd[1790]: time="2025-01-30T13:23:33.035964998Z" level=info msg="CreateContainer within sandbox \"64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:23:33.057217 containerd[1790]: time="2025-01-30T13:23:33.057172708Z" level=info msg="CreateContainer within sandbox \"64bfbd385072715fdc34c49dfb48d5677309d2d1050fd11991bcc9fcbfa1243b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"269e3d7b136341bfb9fd31cdca6f677d334a9039d8328b7aecf309de127c7c9d\"" Jan 30 13:23:33.057414 containerd[1790]: time="2025-01-30T13:23:33.057402150Z" level=info msg="StartContainer for \"269e3d7b136341bfb9fd31cdca6f677d334a9039d8328b7aecf309de127c7c9d\"" Jan 30 13:23:33.080730 systemd[1]: Started cri-containerd-269e3d7b136341bfb9fd31cdca6f677d334a9039d8328b7aecf309de127c7c9d.scope - libcontainer container 269e3d7b136341bfb9fd31cdca6f677d334a9039d8328b7aecf309de127c7c9d. 
Jan 30 13:23:33.104115 containerd[1790]: time="2025-01-30T13:23:33.104095188Z" level=info msg="StartContainer for \"269e3d7b136341bfb9fd31cdca6f677d334a9039d8328b7aecf309de127c7c9d\" returns successfully" Jan 30 13:23:33.220466 kubelet[3262]: I0130 13:23:33.220396 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:23:33.241599 kubelet[3262]: I0130 13:23:33.241429 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57f78c5dd9-226tt" podStartSLOduration=14.189876626 podStartE2EDuration="16.241381561s" podCreationTimestamp="2025-01-30 13:23:17 +0000 UTC" firstStartedPulling="2025-01-30 13:23:29.358452894 +0000 UTC m=+31.323873432" lastFinishedPulling="2025-01-30 13:23:31.409957837 +0000 UTC m=+33.375378367" observedRunningTime="2025-01-30 13:23:32.232036737 +0000 UTC m=+34.197457336" watchObservedRunningTime="2025-01-30 13:23:33.241381561 +0000 UTC m=+35.206802151" Jan 30 13:23:34.273730 kubelet[3262]: I0130 13:23:34.273683 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b9c4bc7cd-rnfv6" podStartSLOduration=12.607523895 podStartE2EDuration="16.27366588s" podCreationTimestamp="2025-01-30 13:23:18 +0000 UTC" firstStartedPulling="2025-01-30 13:23:29.366716418 +0000 UTC m=+31.332136956" lastFinishedPulling="2025-01-30 13:23:33.032858404 +0000 UTC m=+34.998278941" observedRunningTime="2025-01-30 13:23:33.241235146 +0000 UTC m=+35.206655794" watchObservedRunningTime="2025-01-30 13:23:34.27366588 +0000 UTC m=+36.239086408" Jan 30 13:23:34.314470 containerd[1790]: time="2025-01-30T13:23:34.314440014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:34.314804 containerd[1790]: time="2025-01-30T13:23:34.314665064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes 
read=7902632" Jan 30 13:23:34.315080 containerd[1790]: time="2025-01-30T13:23:34.315067193Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:34.316957 containerd[1790]: time="2025-01-30T13:23:34.316915101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:34.317372 containerd[1790]: time="2025-01-30T13:23:34.317322274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.284389507s" Jan 30 13:23:34.317372 containerd[1790]: time="2025-01-30T13:23:34.317336752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:23:34.318849 containerd[1790]: time="2025-01-30T13:23:34.318778481Z" level=info msg="CreateContainer within sandbox \"345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:23:34.324626 containerd[1790]: time="2025-01-30T13:23:34.324574762Z" level=info msg="CreateContainer within sandbox \"345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9aa6e3504423d290b34476a9bf903c663b00f71cf8ebd6f64470deedd3d48cb0\"" Jan 30 13:23:34.324993 containerd[1790]: time="2025-01-30T13:23:34.324959987Z" level=info msg="StartContainer for 
\"9aa6e3504423d290b34476a9bf903c663b00f71cf8ebd6f64470deedd3d48cb0\"" Jan 30 13:23:34.346633 systemd[1]: Started cri-containerd-9aa6e3504423d290b34476a9bf903c663b00f71cf8ebd6f64470deedd3d48cb0.scope - libcontainer container 9aa6e3504423d290b34476a9bf903c663b00f71cf8ebd6f64470deedd3d48cb0. Jan 30 13:23:34.360561 containerd[1790]: time="2025-01-30T13:23:34.360508842Z" level=info msg="StartContainer for \"9aa6e3504423d290b34476a9bf903c663b00f71cf8ebd6f64470deedd3d48cb0\" returns successfully" Jan 30 13:23:34.361061 containerd[1790]: time="2025-01-30T13:23:34.361046465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:23:35.790055 containerd[1790]: time="2025-01-30T13:23:35.790030372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:35.790320 containerd[1790]: time="2025-01-30T13:23:35.790263543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:23:35.790664 containerd[1790]: time="2025-01-30T13:23:35.790649988Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:35.791640 containerd[1790]: time="2025-01-30T13:23:35.791628147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:23:35.792067 containerd[1790]: time="2025-01-30T13:23:35.792052591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.43098801s" Jan 30 13:23:35.792116 containerd[1790]: time="2025-01-30T13:23:35.792070219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:23:35.793074 containerd[1790]: time="2025-01-30T13:23:35.793061618Z" level=info msg="CreateContainer within sandbox \"345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:23:35.797568 containerd[1790]: time="2025-01-30T13:23:35.797523886Z" level=info msg="CreateContainer within sandbox \"345fb23f21b48251c220f53540bd4160aa4828c5b63b770fc2d8c2d18f9dce7f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1dfedd381d35e0168889d6f35b27e00a5ff2aa8c8f3fcec3d6f2c38f15db3955\"" Jan 30 13:23:35.797760 containerd[1790]: time="2025-01-30T13:23:35.797687326Z" level=info msg="StartContainer for \"1dfedd381d35e0168889d6f35b27e00a5ff2aa8c8f3fcec3d6f2c38f15db3955\"" Jan 30 13:23:35.820644 systemd[1]: Started cri-containerd-1dfedd381d35e0168889d6f35b27e00a5ff2aa8c8f3fcec3d6f2c38f15db3955.scope - libcontainer container 1dfedd381d35e0168889d6f35b27e00a5ff2aa8c8f3fcec3d6f2c38f15db3955. 
Jan 30 13:23:35.834119 containerd[1790]: time="2025-01-30T13:23:35.834100939Z" level=info msg="StartContainer for \"1dfedd381d35e0168889d6f35b27e00a5ff2aa8c8f3fcec3d6f2c38f15db3955\" returns successfully" Jan 30 13:23:36.111956 kubelet[3262]: I0130 13:23:36.111939 3262 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:23:36.111956 kubelet[3262]: I0130 13:23:36.111957 3262 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:23:36.257133 kubelet[3262]: I0130 13:23:36.257104 3262 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fdxpb" podStartSLOduration=12.842147062 podStartE2EDuration="19.2570925s" podCreationTimestamp="2025-01-30 13:23:17 +0000 UTC" firstStartedPulling="2025-01-30 13:23:29.377455155 +0000 UTC m=+31.342875685" lastFinishedPulling="2025-01-30 13:23:35.792400587 +0000 UTC m=+37.757821123" observedRunningTime="2025-01-30 13:23:36.256953538 +0000 UTC m=+38.222374084" watchObservedRunningTime="2025-01-30 13:23:36.2570925 +0000 UTC m=+38.222513028" Jan 30 13:23:39.703394 kubelet[3262]: I0130 13:23:39.703282 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:23:40.493052 kubelet[3262]: I0130 13:23:40.492931 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:23:40.921493 kernel: bpftool[7167]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:23:41.070684 systemd-networkd[1702]: vxlan.calico: Link UP Jan 30 13:23:41.070687 systemd-networkd[1702]: vxlan.calico: Gained carrier Jan 30 13:23:42.330839 systemd-networkd[1702]: vxlan.calico: Gained IPv6LL Jan 30 13:23:53.785955 kubelet[3262]: I0130 13:23:53.785741 3262 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 30 13:23:58.079076 containerd[1790]: time="2025-01-30T13:23:58.079053300Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:58.079346 containerd[1790]: time="2025-01-30T13:23:58.079120390Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:58.079346 containerd[1790]: time="2025-01-30T13:23:58.079130612Z" level=info msg="StopPodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:58.079387 containerd[1790]: time="2025-01-30T13:23:58.079352474Z" level=info msg="RemovePodSandbox for \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:58.079387 containerd[1790]: time="2025-01-30T13:23:58.079368875Z" level=info msg="Forcibly stopping sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\"" Jan 30 13:23:58.079435 containerd[1790]: time="2025-01-30T13:23:58.079409943Z" level=info msg="TearDown network for sandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" successfully" Jan 30 13:23:58.081280 containerd[1790]: time="2025-01-30T13:23:58.081267632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.081322 containerd[1790]: time="2025-01-30T13:23:58.081291305Z" level=info msg="RemovePodSandbox \"26e5ec312021cc2d55c6d934528cd54a8e3d4f7271079f3995a5e7f336c44422\" returns successfully" Jan 30 13:23:58.081495 containerd[1790]: time="2025-01-30T13:23:58.081485250Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:58.081536 containerd[1790]: time="2025-01-30T13:23:58.081528223Z" level=info msg="TearDown network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" successfully" Jan 30 13:23:58.081559 containerd[1790]: time="2025-01-30T13:23:58.081535353Z" level=info msg="StopPodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" returns successfully" Jan 30 13:23:58.081661 containerd[1790]: time="2025-01-30T13:23:58.081651838Z" level=info msg="RemovePodSandbox for \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:58.081683 containerd[1790]: time="2025-01-30T13:23:58.081663949Z" level=info msg="Forcibly stopping sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\"" Jan 30 13:23:58.081707 containerd[1790]: time="2025-01-30T13:23:58.081700735Z" level=info msg="TearDown network for sandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" successfully" Jan 30 13:23:58.082827 containerd[1790]: time="2025-01-30T13:23:58.082817766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.082852 containerd[1790]: time="2025-01-30T13:23:58.082837092Z" level=info msg="RemovePodSandbox \"9369b15257605be63fe6b61ce0bd428b3f387b193bb6ff9dda88b3811c3f9772\" returns successfully" Jan 30 13:23:58.082971 containerd[1790]: time="2025-01-30T13:23:58.082962537Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" Jan 30 13:23:58.083021 containerd[1790]: time="2025-01-30T13:23:58.082999017Z" level=info msg="TearDown network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" successfully" Jan 30 13:23:58.083021 containerd[1790]: time="2025-01-30T13:23:58.083020122Z" level=info msg="StopPodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" returns successfully" Jan 30 13:23:58.083145 containerd[1790]: time="2025-01-30T13:23:58.083136853Z" level=info msg="RemovePodSandbox for \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" Jan 30 13:23:58.083320 containerd[1790]: time="2025-01-30T13:23:58.083147822Z" level=info msg="Forcibly stopping sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\"" Jan 30 13:23:58.083320 containerd[1790]: time="2025-01-30T13:23:58.083218868Z" level=info msg="TearDown network for sandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" successfully" Jan 30 13:23:58.084456 containerd[1790]: time="2025-01-30T13:23:58.084415625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.084456 containerd[1790]: time="2025-01-30T13:23:58.084433554Z" level=info msg="RemovePodSandbox \"260bf42e411f71d5685cfaef06e6302bf015ca87d341dd3edde461edf6ed0632\" returns successfully" Jan 30 13:23:58.084674 containerd[1790]: time="2025-01-30T13:23:58.084621767Z" level=info msg="StopPodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\"" Jan 30 13:23:58.084721 containerd[1790]: time="2025-01-30T13:23:58.084709313Z" level=info msg="TearDown network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" successfully" Jan 30 13:23:58.084721 containerd[1790]: time="2025-01-30T13:23:58.084715897Z" level=info msg="StopPodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" returns successfully" Jan 30 13:23:58.085034 containerd[1790]: time="2025-01-30T13:23:58.084968073Z" level=info msg="RemovePodSandbox for \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\"" Jan 30 13:23:58.085034 containerd[1790]: time="2025-01-30T13:23:58.084980223Z" level=info msg="Forcibly stopping sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\"" Jan 30 13:23:58.085097 containerd[1790]: time="2025-01-30T13:23:58.085035511Z" level=info msg="TearDown network for sandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" successfully" Jan 30 13:23:58.086254 containerd[1790]: time="2025-01-30T13:23:58.086233230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.086254 containerd[1790]: time="2025-01-30T13:23:58.086250882Z" level=info msg="RemovePodSandbox \"a3f79b70d5b3cdd33271056a975083e532f37bc7f5dbfe40eb101cb4cde27b5d\" returns successfully" Jan 30 13:23:58.086391 containerd[1790]: time="2025-01-30T13:23:58.086382892Z" level=info msg="StopPodSandbox for \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\"" Jan 30 13:23:58.086428 containerd[1790]: time="2025-01-30T13:23:58.086421924Z" level=info msg="TearDown network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\" successfully" Jan 30 13:23:58.086448 containerd[1790]: time="2025-01-30T13:23:58.086428821Z" level=info msg="StopPodSandbox for \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\" returns successfully" Jan 30 13:23:58.086534 containerd[1790]: time="2025-01-30T13:23:58.086525221Z" level=info msg="RemovePodSandbox for \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\"" Jan 30 13:23:58.086559 containerd[1790]: time="2025-01-30T13:23:58.086538946Z" level=info msg="Forcibly stopping sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\"" Jan 30 13:23:58.086598 containerd[1790]: time="2025-01-30T13:23:58.086581637Z" level=info msg="TearDown network for sandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\" successfully" Jan 30 13:23:58.087711 containerd[1790]: time="2025-01-30T13:23:58.087699868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.087739 containerd[1790]: time="2025-01-30T13:23:58.087722987Z" level=info msg="RemovePodSandbox \"0d6511fd91373020d4696ff109c9233ad3fcfea3ba8da58e9ef388c49d93c6a2\" returns successfully" Jan 30 13:23:58.087841 containerd[1790]: time="2025-01-30T13:23:58.087832780Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:58.087879 containerd[1790]: time="2025-01-30T13:23:58.087871861Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:58.087900 containerd[1790]: time="2025-01-30T13:23:58.087878498Z" level=info msg="StopPodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:58.088012 containerd[1790]: time="2025-01-30T13:23:58.087986514Z" level=info msg="RemovePodSandbox for \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:58.088038 containerd[1790]: time="2025-01-30T13:23:58.088016330Z" level=info msg="Forcibly stopping sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\"" Jan 30 13:23:58.088059 containerd[1790]: time="2025-01-30T13:23:58.088044830Z" level=info msg="TearDown network for sandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" successfully" Jan 30 13:23:58.089248 containerd[1790]: time="2025-01-30T13:23:58.089205131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.089275 containerd[1790]: time="2025-01-30T13:23:58.089253687Z" level=info msg="RemovePodSandbox \"605427072eabd013e508533040a7c444f754287af529bf6be84179422517461f\" returns successfully" Jan 30 13:23:58.089361 containerd[1790]: time="2025-01-30T13:23:58.089352412Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:58.089404 containerd[1790]: time="2025-01-30T13:23:58.089396717Z" level=info msg="TearDown network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" successfully" Jan 30 13:23:58.089428 containerd[1790]: time="2025-01-30T13:23:58.089403988Z" level=info msg="StopPodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" returns successfully" Jan 30 13:23:58.089525 containerd[1790]: time="2025-01-30T13:23:58.089501813Z" level=info msg="RemovePodSandbox for \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:58.089525 containerd[1790]: time="2025-01-30T13:23:58.089513495Z" level=info msg="Forcibly stopping sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\"" Jan 30 13:23:58.089594 containerd[1790]: time="2025-01-30T13:23:58.089563435Z" level=info msg="TearDown network for sandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" successfully" Jan 30 13:23:58.091034 containerd[1790]: time="2025-01-30T13:23:58.091022802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.091070 containerd[1790]: time="2025-01-30T13:23:58.091042162Z" level=info msg="RemovePodSandbox \"a4ea86a6e9811f9fd34947593b3be31504d5f8a9b4b126f81a27a935b2b4d529\" returns successfully" Jan 30 13:23:58.091237 containerd[1790]: time="2025-01-30T13:23:58.091209150Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" Jan 30 13:23:58.091302 containerd[1790]: time="2025-01-30T13:23:58.091294815Z" level=info msg="TearDown network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" successfully" Jan 30 13:23:58.091322 containerd[1790]: time="2025-01-30T13:23:58.091302608Z" level=info msg="StopPodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" returns successfully" Jan 30 13:23:58.091420 containerd[1790]: time="2025-01-30T13:23:58.091407241Z" level=info msg="RemovePodSandbox for \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" Jan 30 13:23:58.091460 containerd[1790]: time="2025-01-30T13:23:58.091420857Z" level=info msg="Forcibly stopping sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\"" Jan 30 13:23:58.091495 containerd[1790]: time="2025-01-30T13:23:58.091468261Z" level=info msg="TearDown network for sandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" successfully" Jan 30 13:23:58.092750 containerd[1790]: time="2025-01-30T13:23:58.092736143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.092813 containerd[1790]: time="2025-01-30T13:23:58.092759424Z" level=info msg="RemovePodSandbox \"d7d9f76f8d47b9b1cce05f1b9b45464cec4c9ed5ecf14b1033bb1ff4424e71ea\" returns successfully" Jan 30 13:23:58.093023 containerd[1790]: time="2025-01-30T13:23:58.093012262Z" level=info msg="StopPodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\"" Jan 30 13:23:58.093107 containerd[1790]: time="2025-01-30T13:23:58.093095825Z" level=info msg="TearDown network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" successfully" Jan 30 13:23:58.093184 containerd[1790]: time="2025-01-30T13:23:58.093105894Z" level=info msg="StopPodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" returns successfully" Jan 30 13:23:58.093288 containerd[1790]: time="2025-01-30T13:23:58.093277262Z" level=info msg="RemovePodSandbox for \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\"" Jan 30 13:23:58.093314 containerd[1790]: time="2025-01-30T13:23:58.093289949Z" level=info msg="Forcibly stopping sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\"" Jan 30 13:23:58.093337 containerd[1790]: time="2025-01-30T13:23:58.093322631Z" level=info msg="TearDown network for sandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" successfully" Jan 30 13:23:58.094633 containerd[1790]: time="2025-01-30T13:23:58.094622130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.094660 containerd[1790]: time="2025-01-30T13:23:58.094639920Z" level=info msg="RemovePodSandbox \"9669f91b316d07dfe4796be47b25db5cd672b20cd1afa11caf154fb882c7d36d\" returns successfully" Jan 30 13:23:58.094892 containerd[1790]: time="2025-01-30T13:23:58.094853340Z" level=info msg="StopPodSandbox for \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\"" Jan 30 13:23:58.094946 containerd[1790]: time="2025-01-30T13:23:58.094927227Z" level=info msg="TearDown network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\" successfully" Jan 30 13:23:58.094946 containerd[1790]: time="2025-01-30T13:23:58.094933471Z" level=info msg="StopPodSandbox for \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\" returns successfully" Jan 30 13:23:58.095076 containerd[1790]: time="2025-01-30T13:23:58.095065096Z" level=info msg="RemovePodSandbox for \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\"" Jan 30 13:23:58.095100 containerd[1790]: time="2025-01-30T13:23:58.095083120Z" level=info msg="Forcibly stopping sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\"" Jan 30 13:23:58.095155 containerd[1790]: time="2025-01-30T13:23:58.095115356Z" level=info msg="TearDown network for sandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\" successfully" Jan 30 13:23:58.096429 containerd[1790]: time="2025-01-30T13:23:58.096415796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.096460 containerd[1790]: time="2025-01-30T13:23:58.096435844Z" level=info msg="RemovePodSandbox \"205fcca239d665ed80a95ca2b62488d28a8fa435eadff1e72b40d117d2f5f002\" returns successfully" Jan 30 13:23:58.096741 containerd[1790]: time="2025-01-30T13:23:58.096672987Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:58.096819 containerd[1790]: time="2025-01-30T13:23:58.096747490Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:58.096819 containerd[1790]: time="2025-01-30T13:23:58.096753732Z" level=info msg="StopPodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns successfully" Jan 30 13:23:58.096999 containerd[1790]: time="2025-01-30T13:23:58.096953942Z" level=info msg="RemovePodSandbox for \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:58.096999 containerd[1790]: time="2025-01-30T13:23:58.096978582Z" level=info msg="Forcibly stopping sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\"" Jan 30 13:23:58.097069 containerd[1790]: time="2025-01-30T13:23:58.097039739Z" level=info msg="TearDown network for sandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" successfully" Jan 30 13:23:58.098908 containerd[1790]: time="2025-01-30T13:23:58.098894422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.098953 containerd[1790]: time="2025-01-30T13:23:58.098916809Z" level=info msg="RemovePodSandbox \"656608d9c10f03e3d07a3419d5bb4f98e84b18d46601bf35b0c5762182d067e2\" returns successfully" Jan 30 13:23:58.099049 containerd[1790]: time="2025-01-30T13:23:58.099038879Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:58.099102 containerd[1790]: time="2025-01-30T13:23:58.099082208Z" level=info msg="TearDown network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" successfully" Jan 30 13:23:58.099123 containerd[1790]: time="2025-01-30T13:23:58.099102500Z" level=info msg="StopPodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" returns successfully" Jan 30 13:23:58.099282 containerd[1790]: time="2025-01-30T13:23:58.099272751Z" level=info msg="RemovePodSandbox for \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:58.099302 containerd[1790]: time="2025-01-30T13:23:58.099286448Z" level=info msg="Forcibly stopping sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\"" Jan 30 13:23:58.099342 containerd[1790]: time="2025-01-30T13:23:58.099323682Z" level=info msg="TearDown network for sandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" successfully" Jan 30 13:23:58.100468 containerd[1790]: time="2025-01-30T13:23:58.100457694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.100501 containerd[1790]: time="2025-01-30T13:23:58.100475973Z" level=info msg="RemovePodSandbox \"2b583543e27cf717dae7c979eabd167db1d4acd488a0fa9105de60d3679dbbdf\" returns successfully" Jan 30 13:23:58.100642 containerd[1790]: time="2025-01-30T13:23:58.100633486Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" Jan 30 13:23:58.100680 containerd[1790]: time="2025-01-30T13:23:58.100672456Z" level=info msg="TearDown network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" successfully" Jan 30 13:23:58.100701 containerd[1790]: time="2025-01-30T13:23:58.100679421Z" level=info msg="StopPodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" returns successfully" Jan 30 13:23:58.100780 containerd[1790]: time="2025-01-30T13:23:58.100770663Z" level=info msg="RemovePodSandbox for \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" Jan 30 13:23:58.100804 containerd[1790]: time="2025-01-30T13:23:58.100782353Z" level=info msg="Forcibly stopping sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\"" Jan 30 13:23:58.100834 containerd[1790]: time="2025-01-30T13:23:58.100818045Z" level=info msg="TearDown network for sandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" successfully" Jan 30 13:23:58.101961 containerd[1790]: time="2025-01-30T13:23:58.101920796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.101961 containerd[1790]: time="2025-01-30T13:23:58.101939131Z" level=info msg="RemovePodSandbox \"f5cb18cbbd624911d49ebba54dbf8b779fbad948887f7a190f8e00119939b0c6\" returns successfully" Jan 30 13:23:58.102180 containerd[1790]: time="2025-01-30T13:23:58.102149681Z" level=info msg="StopPodSandbox for \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\"" Jan 30 13:23:58.102218 containerd[1790]: time="2025-01-30T13:23:58.102213583Z" level=info msg="TearDown network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" successfully" Jan 30 13:23:58.102238 containerd[1790]: time="2025-01-30T13:23:58.102219823Z" level=info msg="StopPodSandbox for \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" returns successfully" Jan 30 13:23:58.102342 containerd[1790]: time="2025-01-30T13:23:58.102334295Z" level=info msg="RemovePodSandbox for \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\"" Jan 30 13:23:58.102360 containerd[1790]: time="2025-01-30T13:23:58.102344125Z" level=info msg="Forcibly stopping sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\"" Jan 30 13:23:58.102406 containerd[1790]: time="2025-01-30T13:23:58.102391239Z" level=info msg="TearDown network for sandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" successfully" Jan 30 13:23:58.103536 containerd[1790]: time="2025-01-30T13:23:58.103488727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.103585 containerd[1790]: time="2025-01-30T13:23:58.103559940Z" level=info msg="RemovePodSandbox \"3e4c5f66afb6b1eaabffc261098be855c35f10f13618084e5a8edab695c19987\" returns successfully" Jan 30 13:23:58.103762 containerd[1790]: time="2025-01-30T13:23:58.103724022Z" level=info msg="StopPodSandbox for \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\"" Jan 30 13:23:58.103823 containerd[1790]: time="2025-01-30T13:23:58.103763572Z" level=info msg="TearDown network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\" successfully" Jan 30 13:23:58.103823 containerd[1790]: time="2025-01-30T13:23:58.103769777Z" level=info msg="StopPodSandbox for \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\" returns successfully" Jan 30 13:23:58.104014 containerd[1790]: time="2025-01-30T13:23:58.103979249Z" level=info msg="RemovePodSandbox for \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\"" Jan 30 13:23:58.104014 containerd[1790]: time="2025-01-30T13:23:58.103991650Z" level=info msg="Forcibly stopping sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\"" Jan 30 13:23:58.104080 containerd[1790]: time="2025-01-30T13:23:58.104037490Z" level=info msg="TearDown network for sandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\" successfully" Jan 30 13:23:58.105231 containerd[1790]: time="2025-01-30T13:23:58.105186373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.105231 containerd[1790]: time="2025-01-30T13:23:58.105204468Z" level=info msg="RemovePodSandbox \"c73f361173258e60a7077692a40b991ae293073b98679716d406fc2bf80d46db\" returns successfully" Jan 30 13:23:58.105392 containerd[1790]: time="2025-01-30T13:23:58.105382849Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:58.105429 containerd[1790]: time="2025-01-30T13:23:58.105422203Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:58.105448 containerd[1790]: time="2025-01-30T13:23:58.105428952Z" level=info msg="StopPodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:58.105627 containerd[1790]: time="2025-01-30T13:23:58.105569250Z" level=info msg="RemovePodSandbox for \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:58.105627 containerd[1790]: time="2025-01-30T13:23:58.105601235Z" level=info msg="Forcibly stopping sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\"" Jan 30 13:23:58.105725 containerd[1790]: time="2025-01-30T13:23:58.105685669Z" level=info msg="TearDown network for sandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" successfully" Jan 30 13:23:58.106886 containerd[1790]: time="2025-01-30T13:23:58.106844655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.106886 containerd[1790]: time="2025-01-30T13:23:58.106882382Z" level=info msg="RemovePodSandbox \"4248680d819e0ec095c551664af1513a10f755fadb9f50bb5c9af675c7b3e4b9\" returns successfully" Jan 30 13:23:58.107114 containerd[1790]: time="2025-01-30T13:23:58.107058958Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 13:23:58.107162 containerd[1790]: time="2025-01-30T13:23:58.107130298Z" level=info msg="TearDown network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" successfully" Jan 30 13:23:58.107162 containerd[1790]: time="2025-01-30T13:23:58.107136101Z" level=info msg="StopPodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" returns successfully" Jan 30 13:23:58.107306 containerd[1790]: time="2025-01-30T13:23:58.107276984Z" level=info msg="RemovePodSandbox for \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 13:23:58.107306 containerd[1790]: time="2025-01-30T13:23:58.107304468Z" level=info msg="Forcibly stopping sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\"" Jan 30 13:23:58.107371 containerd[1790]: time="2025-01-30T13:23:58.107356183Z" level=info msg="TearDown network for sandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" successfully" Jan 30 13:23:58.108484 containerd[1790]: time="2025-01-30T13:23:58.108441948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.108484 containerd[1790]: time="2025-01-30T13:23:58.108475715Z" level=info msg="RemovePodSandbox \"3a841e892d2bb8abaa886a760bc061bb60ecb82d4d413872d01ec533dcc9daf9\" returns successfully" Jan 30 13:23:58.108727 containerd[1790]: time="2025-01-30T13:23:58.108688347Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" Jan 30 13:23:58.108759 containerd[1790]: time="2025-01-30T13:23:58.108727894Z" level=info msg="TearDown network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" successfully" Jan 30 13:23:58.108759 containerd[1790]: time="2025-01-30T13:23:58.108734058Z" level=info msg="StopPodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" returns successfully" Jan 30 13:23:58.108861 containerd[1790]: time="2025-01-30T13:23:58.108822799Z" level=info msg="RemovePodSandbox for \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" Jan 30 13:23:58.108861 containerd[1790]: time="2025-01-30T13:23:58.108832421Z" level=info msg="Forcibly stopping sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\"" Jan 30 13:23:58.108928 containerd[1790]: time="2025-01-30T13:23:58.108885964Z" level=info msg="TearDown network for sandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" successfully" Jan 30 13:23:58.110143 containerd[1790]: time="2025-01-30T13:23:58.110081619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.110143 containerd[1790]: time="2025-01-30T13:23:58.110099355Z" level=info msg="RemovePodSandbox \"e1ec632f17093a060564964259b9b8ffbd1adbadfd2886d631be9d6af60bf772\" returns successfully" Jan 30 13:23:58.110359 containerd[1790]: time="2025-01-30T13:23:58.110286342Z" level=info msg="StopPodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\"" Jan 30 13:23:58.110383 containerd[1790]: time="2025-01-30T13:23:58.110373200Z" level=info msg="TearDown network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" successfully" Jan 30 13:23:58.110383 containerd[1790]: time="2025-01-30T13:23:58.110379780Z" level=info msg="StopPodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" returns successfully" Jan 30 13:23:58.110530 containerd[1790]: time="2025-01-30T13:23:58.110475901Z" level=info msg="RemovePodSandbox for \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\"" Jan 30 13:23:58.110530 containerd[1790]: time="2025-01-30T13:23:58.110491622Z" level=info msg="Forcibly stopping sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\"" Jan 30 13:23:58.110599 containerd[1790]: time="2025-01-30T13:23:58.110536969Z" level=info msg="TearDown network for sandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" successfully" Jan 30 13:23:58.111671 containerd[1790]: time="2025-01-30T13:23:58.111640834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.111725 containerd[1790]: time="2025-01-30T13:23:58.111676649Z" level=info msg="RemovePodSandbox \"00c36f7d730ba830fd3a127b34ab34c97dd9993cb50baa4c77a29f216def50c7\" returns successfully" Jan 30 13:23:58.111926 containerd[1790]: time="2025-01-30T13:23:58.111888171Z" level=info msg="StopPodSandbox for \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\"" Jan 30 13:23:58.111968 containerd[1790]: time="2025-01-30T13:23:58.111948133Z" level=info msg="TearDown network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\" successfully" Jan 30 13:23:58.111968 containerd[1790]: time="2025-01-30T13:23:58.111954171Z" level=info msg="StopPodSandbox for \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\" returns successfully" Jan 30 13:23:58.112196 containerd[1790]: time="2025-01-30T13:23:58.112156448Z" level=info msg="RemovePodSandbox for \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\"" Jan 30 13:23:58.112196 containerd[1790]: time="2025-01-30T13:23:58.112168021Z" level=info msg="Forcibly stopping sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\"" Jan 30 13:23:58.112275 containerd[1790]: time="2025-01-30T13:23:58.112234392Z" level=info msg="TearDown network for sandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\" successfully" Jan 30 13:23:58.113352 containerd[1790]: time="2025-01-30T13:23:58.113341965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.113395 containerd[1790]: time="2025-01-30T13:23:58.113359373Z" level=info msg="RemovePodSandbox \"3852da042d95afcd20a1f6e3440356fa428f1503246f460b6a0b702d0d6d7eb5\" returns successfully" Jan 30 13:23:58.113597 containerd[1790]: time="2025-01-30T13:23:58.113551973Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:58.113678 containerd[1790]: time="2025-01-30T13:23:58.113649135Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:58.113678 containerd[1790]: time="2025-01-30T13:23:58.113668976Z" level=info msg="StopPodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:58.113903 containerd[1790]: time="2025-01-30T13:23:58.113861133Z" level=info msg="RemovePodSandbox for \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:58.113903 containerd[1790]: time="2025-01-30T13:23:58.113871818Z" level=info msg="Forcibly stopping sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\"" Jan 30 13:23:58.113994 containerd[1790]: time="2025-01-30T13:23:58.113942205Z" level=info msg="TearDown network for sandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" successfully" Jan 30 13:23:58.115156 containerd[1790]: time="2025-01-30T13:23:58.115113274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.115156 containerd[1790]: time="2025-01-30T13:23:58.115129231Z" level=info msg="RemovePodSandbox \"5d14e83ac30a227e52aa66da87664b7d64b42ed526129b45288a9432b4d0793c\" returns successfully" Jan 30 13:23:58.115341 containerd[1790]: time="2025-01-30T13:23:58.115304505Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:58.115375 containerd[1790]: time="2025-01-30T13:23:58.115361976Z" level=info msg="TearDown network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" successfully" Jan 30 13:23:58.115375 containerd[1790]: time="2025-01-30T13:23:58.115368450Z" level=info msg="StopPodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" returns successfully" Jan 30 13:23:58.115481 containerd[1790]: time="2025-01-30T13:23:58.115469955Z" level=info msg="RemovePodSandbox for \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:58.115505 containerd[1790]: time="2025-01-30T13:23:58.115489899Z" level=info msg="Forcibly stopping sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\"" Jan 30 13:23:58.115602 containerd[1790]: time="2025-01-30T13:23:58.115523658Z" level=info msg="TearDown network for sandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" successfully" Jan 30 13:23:58.116718 containerd[1790]: time="2025-01-30T13:23:58.116678148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.116718 containerd[1790]: time="2025-01-30T13:23:58.116694886Z" level=info msg="RemovePodSandbox \"59f97021a99f1aaf35f01bb041ef4aeb3b47b1fa31e640046bb5452933ee7943\" returns successfully" Jan 30 13:23:58.116988 containerd[1790]: time="2025-01-30T13:23:58.116952092Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" Jan 30 13:23:58.117017 containerd[1790]: time="2025-01-30T13:23:58.116989774Z" level=info msg="TearDown network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" successfully" Jan 30 13:23:58.117017 containerd[1790]: time="2025-01-30T13:23:58.116995745Z" level=info msg="StopPodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" returns successfully" Jan 30 13:23:58.117161 containerd[1790]: time="2025-01-30T13:23:58.117126336Z" level=info msg="RemovePodSandbox for \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" Jan 30 13:23:58.117161 containerd[1790]: time="2025-01-30T13:23:58.117137312Z" level=info msg="Forcibly stopping sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\"" Jan 30 13:23:58.117257 containerd[1790]: time="2025-01-30T13:23:58.117189075Z" level=info msg="TearDown network for sandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" successfully" Jan 30 13:23:58.118460 containerd[1790]: time="2025-01-30T13:23:58.118420523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.118460 containerd[1790]: time="2025-01-30T13:23:58.118438744Z" level=info msg="RemovePodSandbox \"83b10095abeca1141d59d9c1e25594f19b56efa730512899ad42b1a65f5b3a51\" returns successfully" Jan 30 13:23:58.118714 containerd[1790]: time="2025-01-30T13:23:58.118677127Z" level=info msg="StopPodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\"" Jan 30 13:23:58.118788 containerd[1790]: time="2025-01-30T13:23:58.118758007Z" level=info msg="TearDown network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" successfully" Jan 30 13:23:58.118788 containerd[1790]: time="2025-01-30T13:23:58.118764628Z" level=info msg="StopPodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" returns successfully" Jan 30 13:23:58.119012 containerd[1790]: time="2025-01-30T13:23:58.118969296Z" level=info msg="RemovePodSandbox for \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\"" Jan 30 13:23:58.119012 containerd[1790]: time="2025-01-30T13:23:58.118982129Z" level=info msg="Forcibly stopping sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\"" Jan 30 13:23:58.119062 containerd[1790]: time="2025-01-30T13:23:58.119013734Z" level=info msg="TearDown network for sandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" successfully" Jan 30 13:23:58.120364 containerd[1790]: time="2025-01-30T13:23:58.120324961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.120364 containerd[1790]: time="2025-01-30T13:23:58.120342148Z" level=info msg="RemovePodSandbox \"66f88cc78b2e66c7d30ab0b3951ab1b48a746a48fe64f2ec408ca68348db1661\" returns successfully" Jan 30 13:23:58.120550 containerd[1790]: time="2025-01-30T13:23:58.120493244Z" level=info msg="StopPodSandbox for \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\"" Jan 30 13:23:58.120610 containerd[1790]: time="2025-01-30T13:23:58.120553190Z" level=info msg="TearDown network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\" successfully" Jan 30 13:23:58.120610 containerd[1790]: time="2025-01-30T13:23:58.120559833Z" level=info msg="StopPodSandbox for \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\" returns successfully" Jan 30 13:23:58.120801 containerd[1790]: time="2025-01-30T13:23:58.120742199Z" level=info msg="RemovePodSandbox for \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\"" Jan 30 13:23:58.120801 containerd[1790]: time="2025-01-30T13:23:58.120773365Z" level=info msg="Forcibly stopping sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\"" Jan 30 13:23:58.120898 containerd[1790]: time="2025-01-30T13:23:58.120853186Z" level=info msg="TearDown network for sandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\" successfully" Jan 30 13:23:58.122139 containerd[1790]: time="2025-01-30T13:23:58.122099485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.122139 containerd[1790]: time="2025-01-30T13:23:58.122116978Z" level=info msg="RemovePodSandbox \"5ab3ca79b788c7ec2bc05c10a4c75acf80bad1758209339f301af1d54047c520\" returns successfully" Jan 30 13:23:58.122404 containerd[1790]: time="2025-01-30T13:23:58.122365345Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:58.122434 containerd[1790]: time="2025-01-30T13:23:58.122404855Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:58.122434 containerd[1790]: time="2025-01-30T13:23:58.122411098Z" level=info msg="StopPodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:58.122588 containerd[1790]: time="2025-01-30T13:23:58.122540685Z" level=info msg="RemovePodSandbox for \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:58.122588 containerd[1790]: time="2025-01-30T13:23:58.122551997Z" level=info msg="Forcibly stopping sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\"" Jan 30 13:23:58.122675 containerd[1790]: time="2025-01-30T13:23:58.122641369Z" level=info msg="TearDown network for sandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" successfully" Jan 30 13:23:58.123808 containerd[1790]: time="2025-01-30T13:23:58.123748744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.123808 containerd[1790]: time="2025-01-30T13:23:58.123766192Z" level=info msg="RemovePodSandbox \"5e1cd410e411b3091d595be929ff17ef137d05c6640faa32c27e91ba7bb80a38\" returns successfully" Jan 30 13:23:58.124043 containerd[1790]: time="2025-01-30T13:23:58.123996984Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:58.124086 containerd[1790]: time="2025-01-30T13:23:58.124067847Z" level=info msg="TearDown network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" successfully" Jan 30 13:23:58.124086 containerd[1790]: time="2025-01-30T13:23:58.124074024Z" level=info msg="StopPodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" returns successfully" Jan 30 13:23:58.124267 containerd[1790]: time="2025-01-30T13:23:58.124226563Z" level=info msg="RemovePodSandbox for \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:58.124267 containerd[1790]: time="2025-01-30T13:23:58.124238447Z" level=info msg="Forcibly stopping sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\"" Jan 30 13:23:58.124348 containerd[1790]: time="2025-01-30T13:23:58.124287362Z" level=info msg="TearDown network for sandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" successfully" Jan 30 13:23:58.125515 containerd[1790]: time="2025-01-30T13:23:58.125456809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.125515 containerd[1790]: time="2025-01-30T13:23:58.125494924Z" level=info msg="RemovePodSandbox \"97b6d89eeb8e03c02be9a604b841a633eead4191c0e5978873e7c2ddcac461c0\" returns successfully" Jan 30 13:23:58.125738 containerd[1790]: time="2025-01-30T13:23:58.125728733Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" Jan 30 13:23:58.125831 containerd[1790]: time="2025-01-30T13:23:58.125798365Z" level=info msg="TearDown network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" successfully" Jan 30 13:23:58.125831 containerd[1790]: time="2025-01-30T13:23:58.125805017Z" level=info msg="StopPodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" returns successfully" Jan 30 13:23:58.126084 containerd[1790]: time="2025-01-30T13:23:58.126031417Z" level=info msg="RemovePodSandbox for \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" Jan 30 13:23:58.126084 containerd[1790]: time="2025-01-30T13:23:58.126043316Z" level=info msg="Forcibly stopping sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\"" Jan 30 13:23:58.126133 containerd[1790]: time="2025-01-30T13:23:58.126101245Z" level=info msg="TearDown network for sandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" successfully" Jan 30 13:23:58.127308 containerd[1790]: time="2025-01-30T13:23:58.127253771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.127308 containerd[1790]: time="2025-01-30T13:23:58.127291215Z" level=info msg="RemovePodSandbox \"6acbd3c5ca8a019e62af99b82edb81302ad0e173abff77bed54cd3c374476b47\" returns successfully" Jan 30 13:23:58.127448 containerd[1790]: time="2025-01-30T13:23:58.127436550Z" level=info msg="StopPodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\"" Jan 30 13:23:58.127492 containerd[1790]: time="2025-01-30T13:23:58.127476618Z" level=info msg="TearDown network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" successfully" Jan 30 13:23:58.127492 containerd[1790]: time="2025-01-30T13:23:58.127490736Z" level=info msg="StopPodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" returns successfully" Jan 30 13:23:58.127692 containerd[1790]: time="2025-01-30T13:23:58.127653091Z" level=info msg="RemovePodSandbox for \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\"" Jan 30 13:23:58.127692 containerd[1790]: time="2025-01-30T13:23:58.127663917Z" level=info msg="Forcibly stopping sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\"" Jan 30 13:23:58.127824 containerd[1790]: time="2025-01-30T13:23:58.127714969Z" level=info msg="TearDown network for sandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" successfully" Jan 30 13:23:58.128863 containerd[1790]: time="2025-01-30T13:23:58.128815426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.128863 containerd[1790]: time="2025-01-30T13:23:58.128831805Z" level=info msg="RemovePodSandbox \"2aa6b8b4d5c39686cfd8a464b97f8f0601250566bc26122290bcd2fc07a8cb47\" returns successfully" Jan 30 13:23:58.129125 containerd[1790]: time="2025-01-30T13:23:58.129083354Z" level=info msg="StopPodSandbox for \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\"" Jan 30 13:23:58.129125 containerd[1790]: time="2025-01-30T13:23:58.129121016Z" level=info msg="TearDown network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\" successfully" Jan 30 13:23:58.129217 containerd[1790]: time="2025-01-30T13:23:58.129127299Z" level=info msg="StopPodSandbox for \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\" returns successfully" Jan 30 13:23:58.129363 containerd[1790]: time="2025-01-30T13:23:58.129351316Z" level=info msg="RemovePodSandbox for \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\"" Jan 30 13:23:58.129363 containerd[1790]: time="2025-01-30T13:23:58.129362872Z" level=info msg="Forcibly stopping sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\"" Jan 30 13:23:58.129412 containerd[1790]: time="2025-01-30T13:23:58.129395706Z" level=info msg="TearDown network for sandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\" successfully" Jan 30 13:23:58.130765 containerd[1790]: time="2025-01-30T13:23:58.130719439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:23:58.130765 containerd[1790]: time="2025-01-30T13:23:58.130735621Z" level=info msg="RemovePodSandbox \"51bdcba4dae78204389cddd9b366b28c033a33a47742764d88e5c7b90fa1379d\" returns successfully" Jan 30 13:24:17.162806 kubelet[3262]: I0130 13:24:17.162717 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:28:59.245134 systemd[1]: Started sshd@9-147.75.90.199:22-218.92.0.210:35574.service - OpenSSH per-connection server daemon (218.92.0.210:35574). Jan 30 13:28:59.804176 sshd[8025]: Unable to negotiate with 218.92.0.210 port 35574: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Jan 30 13:28:59.806220 systemd[1]: sshd@9-147.75.90.199:22-218.92.0.210:35574.service: Deactivated successfully. Jan 30 13:29:19.690647 systemd[1]: Started sshd@10-147.75.90.199:22-60.13.146.4:17436.service - OpenSSH per-connection server daemon (60.13.146.4:17436). Jan 30 13:30:06.151146 update_engine[1777]: I20250130 13:30:06.150905 1777 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 13:30:06.151146 update_engine[1777]: I20250130 13:30:06.151006 1777 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 13:30:06.152243 update_engine[1777]: I20250130 13:30:06.151412 1777 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 13:30:06.152460 update_engine[1777]: I20250130 13:30:06.152408 1777 omaha_request_params.cc:62] Current group set to beta Jan 30 13:30:06.152726 update_engine[1777]: I20250130 13:30:06.152657 1777 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 13:30:06.152726 update_engine[1777]: I20250130 13:30:06.152694 1777 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 13:30:06.153082 update_engine[1777]: I20250130 13:30:06.152733 1777 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 13:30:06.153082 update_engine[1777]: I20250130 13:30:06.152838 1777 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 13:30:06.153082 update_engine[1777]: I20250130 13:30:06.152999 1777 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 13:30:06.153082 update_engine[1777]: I20250130 13:30:06.153028 1777 omaha_request_action.cc:272] Request: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: Jan 30 13:30:06.153082 update_engine[1777]: I20250130 13:30:06.153045 1777 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:30:06.154225 locksmithd[1826]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 13:30:06.155810 update_engine[1777]: I20250130 13:30:06.155764 1777 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:30:06.155975 update_engine[1777]: I20250130 13:30:06.155936 1777 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 13:30:06.156817 update_engine[1777]: E20250130 13:30:06.156775 1777 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:30:06.156817 update_engine[1777]: I20250130 13:30:06.156807 1777 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 13:30:16.096014 update_engine[1777]: I20250130 13:30:16.095863 1777 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:30:16.096999 update_engine[1777]: I20250130 13:30:16.096364 1777 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:30:16.097118 update_engine[1777]: I20250130 13:30:16.096989 1777 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:30:16.097692 update_engine[1777]: E20250130 13:30:16.097575 1777 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:30:16.097901 update_engine[1777]: I20250130 13:30:16.097702 1777 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 13:30:16.827867 systemd[1]: Started sshd@11-147.75.90.199:22-139.178.89.65:52294.service - OpenSSH per-connection server daemon (139.178.89.65:52294). Jan 30 13:30:16.873163 sshd[8209]: Accepted publickey for core from 139.178.89.65 port 52294 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:30:16.873893 sshd-session[8209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:16.877024 systemd-logind[1772]: New session 12 of user core. Jan 30 13:30:16.887805 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:30:16.994184 sshd[8211]: Connection closed by 139.178.89.65 port 52294 Jan 30 13:30:16.994346 sshd-session[8209]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:16.996239 systemd[1]: sshd@11-147.75.90.199:22-139.178.89.65:52294.service: Deactivated successfully. Jan 30 13:30:16.997127 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 30 13:30:16.997482 systemd-logind[1772]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:30:16.998109 systemd-logind[1772]: Removed session 12.
Jan 30 13:30:22.025889 systemd[1]: Started sshd@12-147.75.90.199:22-139.178.89.65:59230.service - OpenSSH per-connection server daemon (139.178.89.65:59230).
Jan 30 13:30:22.054235 sshd[8236]: Accepted publickey for core from 139.178.89.65 port 59230 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:22.054843 sshd-session[8236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:22.057343 systemd-logind[1772]: New session 13 of user core.
Jan 30 13:30:22.073784 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:30:22.159185 sshd[8238]: Connection closed by 139.178.89.65 port 59230
Jan 30 13:30:22.159371 sshd-session[8236]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:22.161067 systemd[1]: sshd@12-147.75.90.199:22-139.178.89.65:59230.service: Deactivated successfully.
Jan 30 13:30:22.162034 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:30:22.162775 systemd-logind[1772]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:30:22.163382 systemd-logind[1772]: Removed session 13.
Jan 30 13:30:26.090469 update_engine[1777]: I20250130 13:30:26.090323 1777 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 13:30:26.091335 update_engine[1777]: I20250130 13:30:26.090868 1777 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 13:30:26.091495 update_engine[1777]: I20250130 13:30:26.091419 1777 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 13:30:26.091915 update_engine[1777]: E20250130 13:30:26.091801 1777 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 13:30:26.092094 update_engine[1777]: I20250130 13:30:26.091938 1777 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 30 13:30:27.176857 systemd[1]: Started sshd@13-147.75.90.199:22-139.178.89.65:59234.service - OpenSSH per-connection server daemon (139.178.89.65:59234).
Jan 30 13:30:27.212541 sshd[8283]: Accepted publickey for core from 139.178.89.65 port 59234 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:27.213263 sshd-session[8283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:27.215952 systemd-logind[1772]: New session 14 of user core.
Jan 30 13:30:27.235750 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:30:27.333464 sshd[8285]: Connection closed by 139.178.89.65 port 59234
Jan 30 13:30:27.334320 sshd-session[8283]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:27.360156 systemd[1]: sshd@13-147.75.90.199:22-139.178.89.65:59234.service: Deactivated successfully.
Jan 30 13:30:27.364325 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:30:27.367745 systemd-logind[1772]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:30:27.383364 systemd[1]: Started sshd@14-147.75.90.199:22-139.178.89.65:59240.service - OpenSSH per-connection server daemon (139.178.89.65:59240).
Jan 30 13:30:27.386021 systemd-logind[1772]: Removed session 14.
Jan 30 13:30:27.444512 sshd[8310]: Accepted publickey for core from 139.178.89.65 port 59240 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:27.445108 sshd-session[8310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:27.447810 systemd-logind[1772]: New session 15 of user core.
Jan 30 13:30:27.465951 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:30:27.574121 sshd[8312]: Connection closed by 139.178.89.65 port 59240
Jan 30 13:30:27.574282 sshd-session[8310]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:27.591323 systemd[1]: sshd@14-147.75.90.199:22-139.178.89.65:59240.service: Deactivated successfully.
Jan 30 13:30:27.592213 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:30:27.592983 systemd-logind[1772]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:30:27.593653 systemd[1]: Started sshd@15-147.75.90.199:22-139.178.89.65:59254.service - OpenSSH per-connection server daemon (139.178.89.65:59254).
Jan 30 13:30:27.594261 systemd-logind[1772]: Removed session 15.
Jan 30 13:30:27.631835 sshd[8334]: Accepted publickey for core from 139.178.89.65 port 59254 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:27.632611 sshd-session[8334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:27.635495 systemd-logind[1772]: New session 16 of user core.
Jan 30 13:30:27.642777 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:30:27.766901 sshd[8336]: Connection closed by 139.178.89.65 port 59254
Jan 30 13:30:27.767050 sshd-session[8334]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:27.768731 systemd[1]: sshd@15-147.75.90.199:22-139.178.89.65:59254.service: Deactivated successfully.
Jan 30 13:30:27.769709 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:30:27.770389 systemd-logind[1772]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:30:27.771155 systemd-logind[1772]: Removed session 16.
Jan 30 13:30:32.801219 systemd[1]: Started sshd@16-147.75.90.199:22-139.178.89.65:60002.service - OpenSSH per-connection server daemon (139.178.89.65:60002).
Jan 30 13:30:32.862558 sshd[8365]: Accepted publickey for core from 139.178.89.65 port 60002 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:32.863225 sshd-session[8365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:32.865912 systemd-logind[1772]: New session 17 of user core.
Jan 30 13:30:32.884604 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:30:32.999276 sshd[8367]: Connection closed by 139.178.89.65 port 60002
Jan 30 13:30:32.999472 sshd-session[8365]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:33.001190 systemd[1]: sshd@16-147.75.90.199:22-139.178.89.65:60002.service: Deactivated successfully.
Jan 30 13:30:33.002198 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:30:33.002889 systemd-logind[1772]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:30:33.003390 systemd-logind[1772]: Removed session 17.
Jan 30 13:30:36.089753 update_engine[1777]: I20250130 13:30:36.089596 1777 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 13:30:36.090625 update_engine[1777]: I20250130 13:30:36.090152 1777 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 13:30:36.090761 update_engine[1777]: I20250130 13:30:36.090718 1777 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 13:30:36.091227 update_engine[1777]: E20250130 13:30:36.091123 1777 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 13:30:36.091571 update_engine[1777]: I20250130 13:30:36.091228 1777 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 13:30:36.091571 update_engine[1777]: I20250130 13:30:36.091251 1777 omaha_request_action.cc:617] Omaha request response:
Jan 30 13:30:36.091571 update_engine[1777]: E20250130 13:30:36.091413 1777 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 30 13:30:36.091571 update_engine[1777]: I20250130 13:30:36.091459 1777 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 30 13:30:36.091571 update_engine[1777]: I20250130 13:30:36.091511 1777 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 13:30:36.091571 update_engine[1777]: I20250130 13:30:36.091533 1777 update_attempter.cc:306] Processing Done.
Jan 30 13:30:36.091571 update_engine[1777]: E20250130 13:30:36.091563 1777 update_attempter.cc:619] Update failed.
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091579 1777 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091595 1777 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091610 1777 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091763 1777 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091823 1777 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091841 1777 omaha_request_action.cc:272] Request:
Jan 30 13:30:36.092200 update_engine[1777]:
Jan 30 13:30:36.092200 update_engine[1777]:
Jan 30 13:30:36.092200 update_engine[1777]:
Jan 30 13:30:36.092200 update_engine[1777]:
Jan 30 13:30:36.092200 update_engine[1777]:
Jan 30 13:30:36.092200 update_engine[1777]:
Jan 30 13:30:36.092200 update_engine[1777]: I20250130 13:30:36.091858 1777 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.092283 1777 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.092754 1777 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 13:30:36.093349 update_engine[1777]: E20250130 13:30:36.093017 1777 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093116 1777 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093138 1777 omaha_request_action.cc:617] Omaha request response:
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093154 1777 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093169 1777 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093190 1777 update_attempter.cc:306] Processing Done.
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093216 1777 update_attempter.cc:310] Error event sent.
Jan 30 13:30:36.093349 update_engine[1777]: I20250130 13:30:36.093253 1777 update_check_scheduler.cc:74] Next update check in 47m19s
Jan 30 13:30:36.094241 locksmithd[1826]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 30 13:30:36.094241 locksmithd[1826]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 30 13:30:38.041840 systemd[1]: Started sshd@17-147.75.90.199:22-139.178.89.65:60004.service - OpenSSH per-connection server daemon (139.178.89.65:60004).
Jan 30 13:30:38.075553 sshd[8391]: Accepted publickey for core from 139.178.89.65 port 60004 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:38.076228 sshd-session[8391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:38.079092 systemd-logind[1772]: New session 18 of user core.
Jan 30 13:30:38.090642 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:30:38.177418 sshd[8393]: Connection closed by 139.178.89.65 port 60004
Jan 30 13:30:38.177642 sshd-session[8391]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:38.196240 systemd[1]: sshd@17-147.75.90.199:22-139.178.89.65:60004.service: Deactivated successfully.
Jan 30 13:30:38.197038 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:30:38.197766 systemd-logind[1772]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:30:38.198416 systemd[1]: Started sshd@18-147.75.90.199:22-139.178.89.65:60010.service - OpenSSH per-connection server daemon (139.178.89.65:60010).
Jan 30 13:30:38.199039 systemd-logind[1772]: Removed session 18.
Jan 30 13:30:38.235576 sshd[8417]: Accepted publickey for core from 139.178.89.65 port 60010 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:38.236318 sshd-session[8417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:38.239360 systemd-logind[1772]: New session 19 of user core.
Jan 30 13:30:38.259708 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:30:38.472118 sshd[8419]: Connection closed by 139.178.89.65 port 60010
Jan 30 13:30:38.472981 sshd-session[8417]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:38.489091 systemd[1]: sshd@18-147.75.90.199:22-139.178.89.65:60010.service: Deactivated successfully.
Jan 30 13:30:38.493205 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:30:38.496938 systemd-logind[1772]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:30:38.516245 systemd[1]: Started sshd@19-147.75.90.199:22-139.178.89.65:60016.service - OpenSSH per-connection server daemon (139.178.89.65:60016).
Jan 30 13:30:38.519010 systemd-logind[1772]: Removed session 19.
Jan 30 13:30:38.580383 sshd[8441]: Accepted publickey for core from 139.178.89.65 port 60016 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:38.581166 sshd-session[8441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:38.584116 systemd-logind[1772]: New session 20 of user core.
Jan 30 13:30:38.598744 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:30:39.493999 sshd[8445]: Connection closed by 139.178.89.65 port 60016
Jan 30 13:30:39.494234 sshd-session[8441]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:39.508595 systemd[1]: sshd@19-147.75.90.199:22-139.178.89.65:60016.service: Deactivated successfully.
Jan 30 13:30:39.509607 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:30:39.510434 systemd-logind[1772]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:30:39.511337 systemd[1]: Started sshd@20-147.75.90.199:22-139.178.89.65:60028.service - OpenSSH per-connection server daemon (139.178.89.65:60028).
Jan 30 13:30:39.512095 systemd-logind[1772]: Removed session 20.
Jan 30 13:30:39.549623 sshd[8473]: Accepted publickey for core from 139.178.89.65 port 60028 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:39.550520 sshd-session[8473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:39.553752 systemd-logind[1772]: New session 21 of user core.
Jan 30 13:30:39.573625 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:30:39.774411 sshd[8479]: Connection closed by 139.178.89.65 port 60028
Jan 30 13:30:39.774811 sshd-session[8473]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:39.791368 systemd[1]: sshd@20-147.75.90.199:22-139.178.89.65:60028.service: Deactivated successfully.
Jan 30 13:30:39.792233 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:30:39.792977 systemd-logind[1772]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:30:39.793602 systemd[1]: Started sshd@21-147.75.90.199:22-139.178.89.65:60034.service - OpenSSH per-connection server daemon (139.178.89.65:60034).
Jan 30 13:30:39.794065 systemd-logind[1772]: Removed session 21.
Jan 30 13:30:39.824377 sshd[8529]: Accepted publickey for core from 139.178.89.65 port 60034 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:39.825023 sshd-session[8529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:39.827939 systemd-logind[1772]: New session 22 of user core.
Jan 30 13:30:39.838799 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:30:39.963797 sshd[8532]: Connection closed by 139.178.89.65 port 60034
Jan 30 13:30:39.963965 sshd-session[8529]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:39.965585 systemd[1]: sshd@21-147.75.90.199:22-139.178.89.65:60034.service: Deactivated successfully.
Jan 30 13:30:39.966535 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:30:39.967285 systemd-logind[1772]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:30:39.968062 systemd-logind[1772]: Removed session 22.
Jan 30 13:30:44.988233 systemd[1]: Started sshd@22-147.75.90.199:22-139.178.89.65:58108.service - OpenSSH per-connection server daemon (139.178.89.65:58108).
Jan 30 13:30:45.020936 sshd[8562]: Accepted publickey for core from 139.178.89.65 port 58108 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:45.021530 sshd-session[8562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:45.024170 systemd-logind[1772]: New session 23 of user core.
Jan 30 13:30:45.045815 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:30:45.129359 sshd[8564]: Connection closed by 139.178.89.65 port 58108
Jan 30 13:30:45.129529 sshd-session[8562]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:45.131050 systemd[1]: sshd@22-147.75.90.199:22-139.178.89.65:58108.service: Deactivated successfully.
Jan 30 13:30:45.131983 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:30:45.132712 systemd-logind[1772]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:30:45.133249 systemd-logind[1772]: Removed session 23.
Jan 30 13:30:50.139383 systemd[1]: Started sshd@23-147.75.90.199:22-139.178.89.65:58110.service - OpenSSH per-connection server daemon (139.178.89.65:58110).
Jan 30 13:30:50.171627 sshd[8608]: Accepted publickey for core from 139.178.89.65 port 58110 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:50.172262 sshd-session[8608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:50.174960 systemd-logind[1772]: New session 24 of user core.
Jan 30 13:30:50.191628 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:30:50.278633 sshd[8610]: Connection closed by 139.178.89.65 port 58110
Jan 30 13:30:50.278795 sshd-session[8608]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:50.280363 systemd[1]: sshd@23-147.75.90.199:22-139.178.89.65:58110.service: Deactivated successfully.
Jan 30 13:30:50.281273 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:30:50.282004 systemd-logind[1772]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:30:50.282473 systemd-logind[1772]: Removed session 24.
Jan 30 13:30:55.307683 systemd[1]: Started sshd@24-147.75.90.199:22-139.178.89.65:57744.service - OpenSSH per-connection server daemon (139.178.89.65:57744).
Jan 30 13:30:55.341275 sshd[8630]: Accepted publickey for core from 139.178.89.65 port 57744 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ
Jan 30 13:30:55.341953 sshd-session[8630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:30:55.344850 systemd-logind[1772]: New session 25 of user core.
Jan 30 13:30:55.356760 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:30:55.445930 sshd[8632]: Connection closed by 139.178.89.65 port 57744
Jan 30 13:30:55.446130 sshd-session[8630]: pam_unix(sshd:session): session closed for user core
Jan 30 13:30:55.448417 systemd[1]: sshd@24-147.75.90.199:22-139.178.89.65:57744.service: Deactivated successfully.
Jan 30 13:30:55.449469 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:30:55.449989 systemd-logind[1772]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:30:55.450695 systemd-logind[1772]: Removed session 25.