Jan 13 21:37:55.466396 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Jan 13 21:37:55.466410 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 21:37:55.466417 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:37:55.466422 kernel: BIOS-provided physical RAM map:
Jan 13 21:37:55.466426 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 13 21:37:55.466430 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 13 21:37:55.466435 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 13 21:37:55.466439 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 13 21:37:55.466443 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 13 21:37:55.466447 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Jan 13 21:37:55.466451 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Jan 13 21:37:55.466456 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Jan 13 21:37:55.466460 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Jan 13 21:37:55.466464 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 13 21:37:55.466470 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 13 21:37:55.466474 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 13 21:37:55.466480 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 13 21:37:55.466484 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 13 21:37:55.466489 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 13 21:37:55.466493 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 21:37:55.466498 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 13 21:37:55.466502 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 13 21:37:55.466507 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 21:37:55.466511 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 13 21:37:55.466516 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 13 21:37:55.466521 kernel: NX (Execute Disable) protection: active
Jan 13 21:37:55.466525 kernel: APIC: Static calls initialized
Jan 13 21:37:55.466530 kernel: SMBIOS 3.2.1 present.
Jan 13 21:37:55.466535 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 13 21:37:55.466540 kernel: tsc: Detected 3400.000 MHz processor
Jan 13 21:37:55.466544 kernel: tsc: Detected 3399.906 MHz TSC
Jan 13 21:37:55.466549 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:37:55.466554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:37:55.466559 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 13 21:37:55.466564 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 13 21:37:55.466568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:37:55.466573 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 13 21:37:55.466579 kernel: Using GB pages for direct mapping
Jan 13 21:37:55.466584 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:37:55.466589 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 13 21:37:55.466595 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 13 21:37:55.466600 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 13 21:37:55.466605 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 13 21:37:55.466610 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 13 21:37:55.466616 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 13 21:37:55.466621 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 13 21:37:55.466626 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 13 21:37:55.466631 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 13 21:37:55.466636 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 13 21:37:55.466641 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 13 21:37:55.466646 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 13 21:37:55.466652 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 13 21:37:55.466657 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 21:37:55.466662 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 13 21:37:55.466667 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 13 21:37:55.466672 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 21:37:55.466677 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 21:37:55.466682 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 13 21:37:55.466687 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 13 21:37:55.466692 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 21:37:55.466698 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 13 21:37:55.466703 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 13 21:37:55.466708 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 13 21:37:55.466713 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 13 21:37:55.466718 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 13 21:37:55.466723 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 13 21:37:55.466728 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 13 21:37:55.466732 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 13 21:37:55.466738 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 13 21:37:55.466743 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 13 21:37:55.466748 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 13 21:37:55.466753 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 13 21:37:55.466758 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 13 21:37:55.466763 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 13 21:37:55.466768 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 13 21:37:55.466773 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 13 21:37:55.466778 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 13 21:37:55.466784 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 13 21:37:55.466789 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 13 21:37:55.466794 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 13 21:37:55.466799 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 13 21:37:55.466804 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 13 21:37:55.466809 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 13 21:37:55.466814 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 13 21:37:55.466819 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 13 21:37:55.466823 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 13 21:37:55.466829 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 13 21:37:55.466834 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 13 21:37:55.466839 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 13 21:37:55.466844 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 13 21:37:55.466849 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 13 21:37:55.466854 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 13 21:37:55.466859 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 13 21:37:55.466864 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 13 21:37:55.466869 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 13 21:37:55.466874 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 13 21:37:55.466880 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 13 21:37:55.466885 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 13 21:37:55.466889 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 13 21:37:55.466894 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 13 21:37:55.466899 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 13 21:37:55.466904 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 13 21:37:55.466909 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 13 21:37:55.466914 kernel: No NUMA configuration found
Jan 13 21:37:55.466919 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 13 21:37:55.466925 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 13 21:37:55.466930 kernel: Zone ranges:
Jan 13 21:37:55.466935 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:37:55.466940 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:37:55.466945 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Jan 13 21:37:55.466950 kernel: Movable zone start for each node
Jan 13 21:37:55.466955 kernel: Early memory node ranges
Jan 13 21:37:55.466960 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 13 21:37:55.466964 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 13 21:37:55.466970 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b25fff]
Jan 13 21:37:55.466975 kernel:   node   0: [mem 0x0000000081b28000-0x000000008afccfff]
Jan 13 21:37:55.466980 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 13 21:37:55.466985 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 13 21:37:55.466994 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Jan 13 21:37:55.467000 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 13 21:37:55.467007 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:37:55.467012 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 13 21:37:55.467037 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 13 21:37:55.467042 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 13 21:37:55.467048 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 13 21:37:55.467053 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 13 21:37:55.467072 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 13 21:37:55.467077 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 13 21:37:55.467083 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 13 21:37:55.467088 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 21:37:55.467093 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 21:37:55.467099 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 21:37:55.467105 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 21:37:55.467110 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 21:37:55.467115 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 21:37:55.467120 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 21:37:55.467126 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 21:37:55.467131 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 21:37:55.467136 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 21:37:55.467141 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 21:37:55.467146 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 21:37:55.467153 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 21:37:55.467158 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 21:37:55.467163 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 21:37:55.467169 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 21:37:55.467174 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 13 21:37:55.467179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:37:55.467185 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:37:55.467190 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:37:55.467195 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:37:55.467202 kernel: TSC deadline timer available
Jan 13 21:37:55.467207 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 13 21:37:55.467212 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 13 21:37:55.467218 kernel: Booting paravirtualized kernel on bare hardware
Jan 13 21:37:55.467223 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:37:55.467228 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 21:37:55.467234 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 21:37:55.467239 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 21:37:55.467244 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 21:37:55.467251 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:37:55.467257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:37:55.467262 kernel: random: crng init done
Jan 13 21:37:55.467267 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 13 21:37:55.467273 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 21:37:55.467278 kernel: Fallback order for Node 0: 0
Jan 13 21:37:55.467283 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 8232415
Jan 13 21:37:55.467289 kernel: Policy zone: Normal
Jan 13 21:37:55.467295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:37:55.467300 kernel: software IO TLB: area num 16.
Jan 13 21:37:55.467306 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 732412K reserved, 0K cma-reserved)
Jan 13 21:37:55.467311 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 21:37:55.467316 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 21:37:55.467322 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:37:55.467327 kernel: Dynamic Preempt: voluntary
Jan 13 21:37:55.467332 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:37:55.467339 kernel: rcu: 	RCU event tracing is enabled.
Jan 13 21:37:55.467345 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 21:37:55.467350 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 13 21:37:55.467355 kernel: 	Rude variant of Tasks RCU enabled.
Jan 13 21:37:55.467361 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 13 21:37:55.467366 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:37:55.467371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 21:37:55.467377 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 13 21:37:55.467382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:37:55.467387 kernel: Console: colour VGA+ 80x25
Jan 13 21:37:55.467394 kernel: printk: console [tty0] enabled
Jan 13 21:37:55.467399 kernel: printk: console [ttyS1] enabled
Jan 13 21:37:55.467404 kernel: ACPI: Core revision 20230628
Jan 13 21:37:55.467410 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 13 21:37:55.467415 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:37:55.467420 kernel: DMAR: Host address width 39
Jan 13 21:37:55.467426 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 13 21:37:55.467431 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 13 21:37:55.467436 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 13 21:37:55.467442 kernel: DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 0
Jan 13 21:37:55.467448 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 13 21:37:55.467453 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 13 21:37:55.467458 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 13 21:37:55.467464 kernel: x2apic enabled
Jan 13 21:37:55.467469 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 13 21:37:55.467474 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 13 21:37:55.467480 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 13 21:37:55.467485 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 13 21:37:55.467491 kernel: process: using mwait in idle threads
Jan 13 21:37:55.467497 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 21:37:55.467502 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 21:37:55.467507 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:37:55.467512 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 21:37:55.467517 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 21:37:55.467523 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 21:37:55.467528 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:37:55.467533 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 21:37:55.467538 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 21:37:55.467544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:37:55.467550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:37:55.467555 kernel: TAA: Mitigation: TSX disabled
Jan 13 21:37:55.467561 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 13 21:37:55.467566 kernel: SRBDS: Mitigation: Microcode
Jan 13 21:37:55.467571 kernel: GDS: Mitigation: Microcode
Jan 13 21:37:55.467576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:37:55.467582 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:37:55.467587 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:37:55.467592 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 21:37:55.467598 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 21:37:55.467603 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 13 21:37:55.467609 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Jan 13 21:37:55.467614 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Jan 13 21:37:55.467620 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 13 21:37:55.467625 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:37:55.467630 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:37:55.467635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:37:55.467641 kernel: landlock: Up and running.
Jan 13 21:37:55.467646 kernel: SELinux:  Initializing.
Jan 13 21:37:55.467651 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:37:55.467656 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:37:55.467662 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 13 21:37:55.467668 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:37:55.467674 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:37:55.467679 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:37:55.467684 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 13 21:37:55.467690 kernel: ... version:                4
Jan 13 21:37:55.467695 kernel: ... bit width:              48
Jan 13 21:37:55.467700 kernel: ... generic registers:      4
Jan 13 21:37:55.467706 kernel: ... value mask:             0000ffffffffffff
Jan 13 21:37:55.467711 kernel: ... max period:             00007fffffffffff
Jan 13 21:37:55.467717 kernel: ... fixed-purpose events:   3
Jan 13 21:37:55.467722 kernel: ... event mask:             000000070000000f
Jan 13 21:37:55.467728 kernel: signal: max sigframe size: 2032
Jan 13 21:37:55.467733 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 13 21:37:55.467738 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:37:55.467744 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 13 21:37:55.467749 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 13 21:37:55.467754 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:37:55.467760 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:37:55.467766 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11 #12 #13 #14 #15
Jan 13 21:37:55.467772 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:37:55.467777 kernel: smp: Brought up 1 node, 16 CPUs
Jan 13 21:37:55.467782 kernel: smpboot: Max logical packages: 1
Jan 13 21:37:55.467788 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 13 21:37:55.467793 kernel: devtmpfs: initialized
Jan 13 21:37:55.467798 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:37:55.467804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Jan 13 21:37:55.467809 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 13 21:37:55.467815 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:37:55.467821 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 21:37:55.467826 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:37:55.467831 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:37:55.467836 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:37:55.467842 kernel: audit: type=2000 audit(1736804270.042:1): state=initialized audit_enabled=0 res=1
Jan 13 21:37:55.467847 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:37:55.467852 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:37:55.467858 kernel: cpuidle: using governor menu
Jan 13 21:37:55.467864 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:37:55.467869 kernel: dca service started, version 1.12.1
Jan 13 21:37:55.467874 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 21:37:55.467880 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:37:55.467885 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 13 21:37:55.467890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:37:55.467896 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:37:55.467901 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:37:55.467906 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:37:55.467912 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:37:55.467918 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:37:55.467923 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:37:55.467928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:37:55.467934 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:37:55.467939 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 13 21:37:55.467944 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 21:37:55.467950 kernel: ACPI: SSDT 0xFFFF8B6381607000 000400 (v02 PmRef  Cpu0Cst  00003001 INTL 20160527)
Jan 13 21:37:55.467955 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 21:37:55.467961 kernel: ACPI: SSDT 0xFFFF8B63815F9800 000683 (v02 PmRef  Cpu0Ist  00003000 INTL 20160527)
Jan 13 21:37:55.467967 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 21:37:55.467972 kernel: ACPI: SSDT 0xFFFF8B63815E4C00 0000F4 (v02 PmRef  Cpu0Psd  00003000 INTL 20160527)
Jan 13 21:37:55.467977 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 21:37:55.467982 kernel: ACPI: SSDT 0xFFFF8B63815FC800 0005FC (v02 PmRef  ApIst    00003000 INTL 20160527)
Jan 13 21:37:55.467987 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 21:37:55.467993 kernel: ACPI: SSDT 0xFFFF8B6381608000 000AB0 (v02 PmRef  ApPsd    00003000 INTL 20160527)
Jan 13 21:37:55.467998 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 21:37:55.468005 kernel: ACPI: SSDT 0xFFFF8B6381E9A400 00030A (v02 PmRef  ApCst    00003000 INTL 20160527)
Jan 13 21:37:55.468011 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 13 21:37:55.468016 kernel: ACPI: Interpreter enabled
Jan 13 21:37:55.468042 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:37:55.468047 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:37:55.468068 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 13 21:37:55.468074 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 13 21:37:55.468079 kernel: HEST: Table parsing has been initialized.
Jan 13 21:37:55.468084 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 13 21:37:55.468089 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:37:55.468096 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:37:55.468101 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 13 21:37:55.468106 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 13 21:37:55.468112 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 13 21:37:55.468117 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 13 21:37:55.468123 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 13 21:37:55.468128 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 13 21:37:55.468133 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 13 21:37:55.468138 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 13 21:37:55.468145 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 13 21:37:55.468150 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 13 21:37:55.468156 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 13 21:37:55.468161 kernel: ACPI: \PIN_: New power resource
Jan 13 21:37:55.468166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 13 21:37:55.468238 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:37:55.468290 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 13 21:37:55.468337 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 13 21:37:55.468346 kernel: PCI host bridge to bus 0000:00
Jan 13 21:37:55.468395 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 13 21:37:55.468438 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 13 21:37:55.468478 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:37:55.468520 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 13 21:37:55.468560 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 13 21:37:55.468600 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 13 21:37:55.468657 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 13 21:37:55.468713 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 13 21:37:55.468761 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 13 21:37:55.468811 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 13 21:37:55.468858 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 13 21:37:55.468907 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 13 21:37:55.468956 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 13 21:37:55.469009 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 13 21:37:55.469087 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 13 21:37:55.469133 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 13 21:37:55.469182 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 13 21:37:55.469228 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 13 21:37:55.469276 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 13 21:37:55.469325 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 13 21:37:55.469372 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 21:37:55.469423 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 13 21:37:55.469469 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 21:37:55.469518 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 13 21:37:55.469568 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 13 21:37:55.469617 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 13 21:37:55.469674 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 13 21:37:55.469725 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 13 21:37:55.469770 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 13 21:37:55.469821 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 13 21:37:55.469867 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 13 21:37:55.469917 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 13 21:37:55.469966 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 13 21:37:55.470034 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 13 21:37:55.470099 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 13 21:37:55.470145 kernel: pci 0000:00:17.0: reg 0x18: [io  0x6050-0x6057]
Jan 13 21:37:55.470194 kernel: pci 0000:00:17.0: reg 0x1c: [io  0x6040-0x6043]
Jan 13 21:37:55.470240 kernel: pci 0000:00:17.0: reg 0x20: [io  0x6020-0x603f]
Jan 13 21:37:55.470288 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 13 21:37:55.470335 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 13 21:37:55.470386 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 13 21:37:55.470434 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 13 21:37:55.470491 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 13 21:37:55.470537 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 13 21:37:55.470588 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 13 21:37:55.470634 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 13 21:37:55.470685 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 13 21:37:55.470731 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 13 21:37:55.470784 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 13 21:37:55.470830 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 13 21:37:55.470881 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 13 21:37:55.470927 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 21:37:55.470978 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 13 21:37:55.471052 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 13 21:37:55.471117 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 13 21:37:55.471163 kernel: pci 0000:00:1f.4: reg 0x20: [io  0xefa0-0xefbf]
Jan 13 21:37:55.471216 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 13 21:37:55.471263 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 13 21:37:55.471318 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 13 21:37:55.471367 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 13 21:37:55.471417 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 13 21:37:55.471465 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 13 21:37:55.471513 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 21:37:55.471561 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 21:37:55.471613 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 13 21:37:55.471661 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 13 21:37:55.471709 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 13 21:37:55.471759 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 13 21:37:55.471807 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 21:37:55.471855 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 21:37:55.471903 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 21:37:55.471949 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 13 21:37:55.471996 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 13 21:37:55.472080
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 21:37:55.472133 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 13 21:37:55.472185 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 13 21:37:55.472233 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 13 21:37:55.472280 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 13 21:37:55.472328 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 13 21:37:55.472376 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.472423 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 21:37:55.472470 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 21:37:55.472519 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 21:37:55.472572 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 13 21:37:55.472620 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 13 21:37:55.472668 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 13 21:37:55.472718 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 21:37:55.472764 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 13 21:37:55.472812 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.472860 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 21:37:55.472907 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 21:37:55.472954 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 21:37:55.473002 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 21:37:55.473108 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 21:37:55.473156 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 13 21:37:55.473204 kernel: pci 0000:06:00.0: supports D1 D2 Jan 13 21:37:55.473251 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:37:55.473301 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Jan 13 21:37:55.473347 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.473394 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.473447 kernel: pci_bus 0000:07: extended config space not accessible Jan 13 21:37:55.473502 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 21:37:55.473551 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 13 21:37:55.473604 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 13 21:37:55.473675 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 21:37:55.473738 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:37:55.473788 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 21:37:55.473836 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:37:55.473901 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 21:37:55.473963 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.474029 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.474054 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 21:37:55.474060 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 21:37:55.474065 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 21:37:55.474071 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 21:37:55.474077 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 21:37:55.474082 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 13 21:37:55.474088 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 21:37:55.474094 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 21:37:55.474099 kernel: iommu: Default domain type: Translated Jan 13 21:37:55.474106 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:37:55.474112 kernel: PCI: Using ACPI for IRQ 
routing Jan 13 21:37:55.474117 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:37:55.474123 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 21:37:55.474129 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Jan 13 21:37:55.474134 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 13 21:37:55.474140 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 13 21:37:55.474145 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 13 21:37:55.474151 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 13 21:37:55.474203 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 13 21:37:55.474252 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 13 21:37:55.474302 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:37:55.474310 kernel: vgaarb: loaded Jan 13 21:37:55.474316 kernel: clocksource: Switched to clocksource tsc-early Jan 13 21:37:55.474322 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:37:55.474328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:37:55.474333 kernel: pnp: PnP ACPI init Jan 13 21:37:55.474383 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 21:37:55.474432 kernel: pnp 00:02: [dma 0 disabled] Jan 13 21:37:55.474479 kernel: pnp 00:03: [dma 0 disabled] Jan 13 21:37:55.474527 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 21:37:55.474570 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 21:37:55.474615 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 13 21:37:55.474661 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 21:37:55.474706 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 21:37:55.474749 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 21:37:55.474791 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Jan 13 21:37:55.474837 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 21:37:55.474879 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 21:37:55.474922 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 21:37:55.474965 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 21:37:55.475078 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 21:37:55.475121 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 21:37:55.475163 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 21:37:55.475204 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 21:37:55.475247 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 13 21:37:55.475290 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 21:37:55.475332 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 21:37:55.475381 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 21:37:55.475390 kernel: pnp: PnP ACPI: found 10 devices Jan 13 21:37:55.475396 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:37:55.475401 kernel: NET: Registered PF_INET protocol family Jan 13 21:37:55.475407 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:37:55.475413 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 21:37:55.475419 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:37:55.475424 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:37:55.475432 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 21:37:55.475437 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 21:37:55.475443 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 21:37:55.475449 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 21:37:55.475455 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:37:55.475461 kernel: NET: Registered PF_XDP protocol family Jan 13 21:37:55.475509 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 13 21:37:55.475558 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 13 21:37:55.475607 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 13 21:37:55.475658 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475706 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475755 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475801 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475849 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 21:37:55.475895 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 21:37:55.475941 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 21:37:55.475991 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 21:37:55.476076 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 21:37:55.476123 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 21:37:55.476169 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 21:37:55.476216 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 21:37:55.476265 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 21:37:55.476312 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 21:37:55.476374 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 21:37:55.476436 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Jan 13 21:37:55.476483 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.476530 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.476576 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 21:37:55.476623 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.476670 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.476715 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 21:37:55.476757 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:37:55.476798 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:37:55.476840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:37:55.476880 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 13 21:37:55.476921 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 21:37:55.476968 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 13 21:37:55.477016 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 21:37:55.477098 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 13 21:37:55.477141 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 13 21:37:55.477187 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 21:37:55.477230 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 13 21:37:55.477276 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 13 21:37:55.477322 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 13 21:37:55.477367 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 21:37:55.477412 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 13 21:37:55.477420 kernel: PCI: CLS 64 bytes, default 64 Jan 13 21:37:55.477426 kernel: DMAR: No ATSR found Jan 13 21:37:55.477432 kernel: DMAR: No SATC 
found Jan 13 21:37:55.477438 kernel: DMAR: dmar0: Using Queued invalidation Jan 13 21:37:55.477483 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 13 21:37:55.477532 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 13 21:37:55.477579 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 13 21:37:55.477625 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 13 21:37:55.477672 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 13 21:37:55.477719 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 13 21:37:55.477795 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 13 21:37:55.477840 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 13 21:37:55.477887 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 13 21:37:55.477933 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 13 21:37:55.477983 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 13 21:37:55.478061 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 13 21:37:55.478108 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 13 21:37:55.478153 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 13 21:37:55.478200 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 13 21:37:55.478246 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 13 21:37:55.478292 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 13 21:37:55.478337 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 13 21:37:55.478386 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 13 21:37:55.478435 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 13 21:37:55.478480 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 13 21:37:55.478528 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 13 21:37:55.478576 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 13 21:37:55.478624 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 13 21:37:55.478671 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 13 21:37:55.478719 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 13 
21:37:55.478770 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 13 21:37:55.478778 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 21:37:55.478784 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:37:55.478790 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 13 21:37:55.478796 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 13 21:37:55.478801 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 21:37:55.478807 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 21:37:55.478813 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 21:37:55.478863 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 21:37:55.478873 kernel: Initialise system trusted keyrings Jan 13 21:37:55.478878 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 21:37:55.478884 kernel: Key type asymmetric registered Jan 13 21:37:55.478890 kernel: Asymmetric key parser 'x509' registered Jan 13 21:37:55.478895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:37:55.478901 kernel: io scheduler mq-deadline registered Jan 13 21:37:55.478907 kernel: io scheduler kyber registered Jan 13 21:37:55.478912 kernel: io scheduler bfq registered Jan 13 21:37:55.478960 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 13 21:37:55.479009 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 13 21:37:55.479094 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 13 21:37:55.479158 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 13 21:37:55.479218 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 13 21:37:55.479264 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 13 21:37:55.479316 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 13 21:37:55.479326 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Jan 13 21:37:55.479332 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 13 21:37:55.479339 kernel: pstore: Using crash dump compression: deflate Jan 13 21:37:55.479344 kernel: pstore: Registered erst as persistent store backend Jan 13 21:37:55.479350 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:37:55.479356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:37:55.479362 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:37:55.479367 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 21:37:55.479373 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 13 21:37:55.479424 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 13 21:37:55.479433 kernel: i8042: PNP: No PS/2 controller found. Jan 13 21:37:55.479476 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 13 21:37:55.479519 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 13 21:37:55.479562 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T21:37:54 UTC (1736804274) Jan 13 21:37:55.479605 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 13 21:37:55.479613 kernel: intel_pstate: Intel P-state driver initializing Jan 13 21:37:55.479619 kernel: intel_pstate: Disabling energy efficiency optimization Jan 13 21:37:55.479626 kernel: intel_pstate: HWP enabled Jan 13 21:37:55.479632 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:37:55.479638 kernel: Segment Routing with IPv6 Jan 13 21:37:55.479643 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:37:55.479649 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:37:55.479655 kernel: Key type dns_resolver registered Jan 13 21:37:55.479660 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 13 21:37:55.479666 kernel: IPI shorthand broadcast: enabled Jan 13 21:37:55.479672 kernel: sched_clock: Marking stable (2491000587, 1449300293)->(4499074336, -558773456) Jan 13 21:37:55.479678 kernel: registered taskstats version 1 Jan 13 21:37:55.479684 kernel: Loading compiled-in X.509 certificates Jan 13 21:37:55.479690 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 21:37:55.479695 kernel: Key type .fscrypt registered Jan 13 21:37:55.479701 kernel: Key type fscrypt-provisioning registered Jan 13 21:37:55.479707 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:37:55.479712 kernel: ima: No architecture policies found Jan 13 21:37:55.479718 kernel: clk: Disabling unused clocks Jan 13 21:37:55.479725 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 21:37:55.479730 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:37:55.479736 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 21:37:55.479742 kernel: Run /init as init process Jan 13 21:37:55.479747 kernel: with arguments: Jan 13 21:37:55.479753 kernel: /init Jan 13 21:37:55.479759 kernel: with environment: Jan 13 21:37:55.479764 kernel: HOME=/ Jan 13 21:37:55.479770 kernel: TERM=linux Jan 13 21:37:55.479775 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:37:55.479783 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:37:55.479790 systemd[1]: Detected architecture x86-64. Jan 13 21:37:55.479796 systemd[1]: Running in initrd. Jan 13 21:37:55.479802 systemd[1]: No hostname configured, using default hostname. Jan 13 21:37:55.479808 systemd[1]: Hostname set to . 
Jan 13 21:37:55.479814 systemd[1]: Initializing machine ID from random generator. Jan 13 21:37:55.479821 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:37:55.479827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:37:55.479833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:37:55.479839 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:37:55.479845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:37:55.479851 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:37:55.479857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:37:55.479863 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:37:55.479871 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:37:55.479877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:37:55.479883 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:37:55.479888 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:37:55.479895 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:37:55.479900 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:37:55.479906 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:37:55.479912 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:37:55.479919 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:37:55.479925 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 13 21:37:55.479931 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:37:55.479937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:37:55.479943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:37:55.479949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:37:55.479955 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:37:55.479961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:37:55.479968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:37:55.479974 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jan 13 21:37:55.479979 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jan 13 21:37:55.479985 kernel: clocksource: Switched to clocksource tsc Jan 13 21:37:55.479991 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:37:55.479997 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:37:55.480053 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:37:55.480084 systemd-journald[268]: Collecting audit messages is disabled. Jan 13 21:37:55.480099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:37:55.480106 systemd-journald[268]: Journal started Jan 13 21:37:55.480119 systemd-journald[268]: Runtime Journal (/run/log/journal/bc6b779d1cb14e8ab89ab75d988b6920) is 8.0M, max 639.9M, 631.9M free. Jan 13 21:37:55.483597 systemd-modules-load[270]: Inserted module 'overlay' Jan 13 21:37:55.500962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:37:55.501009 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:37:55.530034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 13 21:37:55.536685 systemd-modules-load[270]: Inserted module 'br_netfilter' Jan 13 21:37:55.542332 kernel: Bridge firewalling registered Jan 13 21:37:55.542400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:37:55.542600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:37:55.542857 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:37:55.542969 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:37:55.544093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:37:55.544458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:37:55.544858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:37:55.580289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:37:55.671219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:37:55.699644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:37:55.721559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:37:55.757411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:37:55.760078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:37:55.780880 systemd-resolved[293]: Positive Trust Anchors: Jan 13 21:37:55.780886 systemd-resolved[293]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:37:55.780911 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:37:55.782486 systemd-resolved[293]: Defaulting to hostname 'linux'. Jan 13 21:37:55.797626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:37:55.797724 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:37:55.797808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:37:55.802788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:37:55.807343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:37:55.807973 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 21:37:55.880511 dracut-cmdline[307]: dracut-dracut-053 Jan 13 21:37:55.888234 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 21:37:56.057036 kernel: SCSI subsystem initialized Jan 13 21:37:56.068007 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:37:56.081053 kernel: iscsi: registered transport (tcp) Jan 13 21:37:56.102384 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:37:56.102401 kernel: QLogic iSCSI HBA Driver Jan 13 21:37:56.125348 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:37:56.138125 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:37:56.221017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:37:56.221070 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:37:56.226048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:37:56.272077 kernel: raid6: avx2x4 gen() 52097 MB/s Jan 13 21:37:56.293079 kernel: raid6: avx2x2 gen() 52716 MB/s Jan 13 21:37:56.319147 kernel: raid6: avx2x1 gen() 45229 MB/s Jan 13 21:37:56.319166 kernel: raid6: using algorithm avx2x2 gen() 52716 MB/s
Jan 13 21:37:56.346240 kernel: raid6: .... xor() 30926 MB/s, rmw enabled Jan 13 21:37:56.346258 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:37:56.367051 kernel: xor: automatically using best checksumming function avx Jan 13 21:37:56.470058 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:37:56.475338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:37:56.485355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:37:56.502178 systemd-udevd[494]: Using default interface naming scheme 'v255'. Jan 13 21:37:56.514670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:37:56.546281 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:37:56.585063 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jan 13 21:37:56.632192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:37:56.652400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:37:56.741494 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:37:56.782709 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:37:56.782724 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 21:37:56.782732 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 21:37:56.782743 kernel: PTP clock support registered Jan 13 21:37:56.782751 kernel: ACPI: bus type USB registered Jan 13 21:37:56.773147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:37:56.868129 kernel: usbcore: registered new interface driver usbfs Jan 13 21:37:56.868147 kernel: usbcore: registered new interface driver hub Jan 13 21:37:56.868157 kernel: usbcore: registered new device driver usb Jan 13 21:37:56.868166 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:37:56.868175 kernel: libata version 3.00 loaded.
Jan 13 21:37:56.868189 kernel: AES CTR mode by8 optimization enabled Jan 13 21:37:56.868198 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 21:37:56.967852 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 13 21:37:56.967971 kernel: ahci 0000:00:17.0: version 3.0 Jan 13 21:37:56.968084 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 13 21:37:56.968186 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 13 21:37:56.968286 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 13 21:37:56.968385 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 21:37:56.968487 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 13 21:37:56.968586 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 13 21:37:56.968684 kernel: hub 1-0:1.0: USB hub found Jan 13 21:37:56.968791 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 13 21:37:56.968807 kernel: scsi host0: ahci Jan 13 21:37:56.968902 kernel: scsi host1: ahci Jan 13 21:37:56.968996 kernel: scsi host2: ahci Jan 13 21:37:56.969096 kernel: scsi host3: ahci Jan 13 21:37:56.969187 kernel: scsi host4: ahci Jan 13 21:37:56.969277 kernel: scsi host5: ahci Jan 13 21:37:56.969367 kernel: scsi host6: ahci Jan 13 21:37:56.969459 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 13 21:37:56.969475 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 13 21:37:56.969492 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 13 21:37:56.969506 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 13 21:37:56.969520 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Jan 13 21:37:56.969534 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 13 21:37:56.969548 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 13 21:37:56.969561 kernel: hub 1-0:1.0: 16 ports detected Jan 13 21:37:56.969657 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 13 21:37:56.969672 kernel: pps pps0: new PPS source ptp0 Jan 13 21:37:56.969768 kernel: hub 2-0:1.0: USB hub found Jan 13 21:37:56.969872 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 13 21:37:57.019784 kernel: hub 2-0:1.0: 10 ports detected Jan 13 21:37:57.020140 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 21:37:57.020283 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e6:30 Jan 13 21:37:57.020424 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 13 21:37:57.020692 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 21:37:56.805725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:37:56.805793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:37:57.109253 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Jan 13 21:37:57.517406 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 21:37:57.517601 kernel: pps pps1: new PPS source ptp1 Jan 13 21:37:57.517774 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 13 21:37:57.517941 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 21:37:57.518122 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e6:31 Jan 13 21:37:57.518281 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 13 21:37:57.518514 kernel: igb 0000:04:00.0: Using MSI-X interrupts.
4 rx queue(s), 4 tx queue(s) Jan 13 21:37:57.518917 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 13 21:37:57.519155 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:37:57.519173 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 21:37:57.519186 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:37:57.519200 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:37:57.519213 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 21:37:57.519226 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 21:37:57.519239 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 13 21:37:57.519253 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 21:37:57.519267 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 21:37:57.519284 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 21:37:57.519298 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 21:37:57.519310 kernel: ata1.00: Features: NCQ-prio Jan 13 21:37:57.519323 kernel: ata2.00: Features: NCQ-prio Jan 13 21:37:57.519337 kernel: ata1.00: configured for UDMA/133 Jan 13 21:37:57.519350 kernel: ata2.00: configured for UDMA/133 Jan 13 21:37:57.519363 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 21:37:57.520650 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 21:37:57.520724 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 21:37:57.520792 kernel: hub 1-14:1.0: USB hub found Jan 13 21:37:57.520866 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 13 21:37:57.520929 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 13 21:37:57.520991 kernel: hub 1-14:1.0: 4 ports detected Jan 13 21:37:57.521070 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 21:37:57.521079 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 21:37:57.521089 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 21:37:57.521150 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 21:37:57.521210 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 13 21:37:57.521271 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 13 21:37:57.521330 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 13 21:37:57.521388 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 13 21:37:57.521445 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 13 21:37:57.521504 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 21:37:57.521562 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 13 21:37:57.521619 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 13 21:37:57.521676 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 13 21:37:57.521732 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 21:37:57.521790 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 21:37:57.521798 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 13 21:37:57.521855 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 13 21:37:57.521914 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 21:37:57.521923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:37:57.521930 kernel: GPT:9289727 != 937703087 Jan 13 21:37:57.521938 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:37:57.521945 kernel: GPT:9289727 != 937703087 Jan 13 21:37:57.521952 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:37:57.521959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:37:57.521966 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 21:37:57.522037 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 13 21:37:57.522097 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Jan 13 21:37:58.045326 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 21:37:58.045989 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (568) Jan 13 21:37:58.046090 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (546) Jan 13 21:37:58.046164 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 13 21:37:58.046931 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 21:37:58.046980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:37:58.047058 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:37:58.047103 kernel: usbcore: registered new interface driver usbhid Jan 13 21:37:58.047141 kernel: usbhid: USB HID core driver Jan 13 21:37:58.047178 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 13 21:37:58.047216 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 21:37:58.047607 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 13 21:37:58.047999 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 13 21:37:58.048495 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 13 21:37:58.048591 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jan 13 21:37:58.049052 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 21:37:56.925948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:37:58.074438 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 13 21:37:58.074524 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 13 21:37:57.109113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:37:57.109214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:37:57.130106 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:37:57.146187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:37:57.156284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:37:58.124180 disk-uuid[704]: Primary Header is updated. Jan 13 21:37:58.124180 disk-uuid[704]: Secondary Entries is updated. Jan 13 21:37:58.124180 disk-uuid[704]: Secondary Header is updated. Jan 13 21:37:57.156746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:37:57.156769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:37:57.156793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:37:57.157168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:37:57.233319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:37:57.295106 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:37:57.360186 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:37:57.372597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:37:57.561243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 13 21:37:57.588723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 13 21:37:57.606646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 13 21:37:57.620175 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 21:37:57.631080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 21:37:57.656115 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:37:58.681192 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 21:37:58.690063 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:37:58.690081 disk-uuid[705]: The operation has completed successfully. Jan 13 21:37:58.729847 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:37:58.729895 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:37:58.766239 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:37:58.791123 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 21:37:58.791189 sh[734]: Success Jan 13 21:37:58.828201 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:37:58.850132 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:37:58.851386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:37:58.907769 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 21:37:58.907789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:37:58.917404 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:37:58.924428 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:37:58.930283 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:37:58.944009 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:37:58.945538 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:37:58.955443 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:37:58.994781 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 21:37:58.994793 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:37:58.966149 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:37:59.040056 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:37:59.040070 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:37:59.040079 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:37:59.040087 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 21:37:59.040139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:37:59.040629 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:37:59.089352 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:37:59.110869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 13 21:37:59.124183 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:37:59.143164 systemd-networkd[917]: lo: Link UP Jan 13 21:37:59.143167 systemd-networkd[917]: lo: Gained carrier Jan 13 21:37:59.145487 systemd-networkd[917]: Enumeration completed Jan 13 21:37:59.156766 ignition[859]: Ignition 2.20.0 Jan 13 21:37:59.146151 systemd-networkd[917]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:37:59.156770 ignition[859]: Stage: fetch-offline Jan 13 21:37:59.148188 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:37:59.156788 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:37:59.155335 systemd[1]: Reached target network.target - Network. Jan 13 21:37:59.156793 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 21:37:59.158991 unknown[859]: fetched base config from "system" Jan 13 21:37:59.156844 ignition[859]: parsed url from cmdline: "" Jan 13 21:37:59.158995 unknown[859]: fetched user config from "system" Jan 13 21:37:59.156846 ignition[859]: no config URL provided Jan 13 21:37:59.173048 systemd-networkd[917]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:37:59.156849 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:37:59.184375 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:37:59.156872 ignition[859]: parsing config with SHA512: 44a1a4f6dc73ce748b42d291a2101ee2893ed6974b5349b4880c4b1e5fbc55e442c6fe6ca00aaca416f5b9faa845d3175b5909e7a2fff0907e83caf613a04f1a Jan 13 21:37:59.201103 systemd-networkd[917]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:37:59.159291 ignition[859]: fetch-offline: fetch-offline passed Jan 13 21:37:59.212504 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 13 21:37:59.159294 ignition[859]: POST message to Packet Timeline Jan 13 21:37:59.226113 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:37:59.380248 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 13 21:37:59.159296 ignition[859]: POST Status error: resource requires networking Jan 13 21:37:59.377274 systemd-networkd[917]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:37:59.159336 ignition[859]: Ignition finished successfully Jan 13 21:37:59.232451 ignition[930]: Ignition 2.20.0 Jan 13 21:37:59.232455 ignition[930]: Stage: kargs Jan 13 21:37:59.232550 ignition[930]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:37:59.232555 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 21:37:59.233043 ignition[930]: kargs: kargs passed Jan 13 21:37:59.233046 ignition[930]: POST message to Packet Timeline Jan 13 21:37:59.233057 ignition[930]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 21:37:59.233489 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38865->[::1]:53: read: connection refused Jan 13 21:37:59.434353 ignition[930]: GET https://metadata.packet.net/metadata: attempt #2 Jan 13 21:37:59.434615 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58892->[::1]:53: read: connection refused Jan 13 21:37:59.574122 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 13 21:37:59.574686 systemd-networkd[917]: eno1: Link UP Jan 13 21:37:59.574874 systemd-networkd[917]: eno2: Link UP Jan 13 21:37:59.575015 systemd-networkd[917]: enp1s0f0np0: Link UP Jan 13 21:37:59.575176 systemd-networkd[917]: enp1s0f0np0: Gained carrier Jan 13 21:37:59.584242 systemd-networkd[917]: enp1s0f1np1: Link UP
Jan 13 21:37:59.628250 systemd-networkd[917]: enp1s0f0np0: DHCPv4 address 86.109.11.45/31, gateway 86.109.11.44 acquired from 145.40.83.140 Jan 13 21:37:59.835082 ignition[930]: GET https://metadata.packet.net/metadata: attempt #3 Jan 13 21:37:59.836085 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56419->[::1]:53: read: connection refused Jan 13 21:38:00.417823 systemd-networkd[917]: enp1s0f1np1: Gained carrier Jan 13 21:38:00.609610 systemd-networkd[917]: enp1s0f0np0: Gained IPv6LL Jan 13 21:38:00.636585 ignition[930]: GET https://metadata.packet.net/metadata: attempt #4 Jan 13 21:38:00.637683 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44008->[::1]:53: read: connection refused Jan 13 21:38:01.633630 systemd-networkd[917]: enp1s0f1np1: Gained IPv6LL Jan 13 21:38:02.239133 ignition[930]: GET https://metadata.packet.net/metadata: attempt #5 Jan 13 21:38:02.240351 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48521->[::1]:53: read: connection refused Jan 13 21:38:05.442730 ignition[930]: GET https://metadata.packet.net/metadata: attempt #6 Jan 13 21:38:06.358766 ignition[930]: GET result: OK Jan 13 21:38:07.228887 ignition[930]: Ignition finished successfully Jan 13 21:38:07.233962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:38:07.261247 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:38:07.267518 ignition[950]: Ignition 2.20.0 Jan 13 21:38:07.267522 ignition[950]: Stage: disks Jan 13 21:38:07.267625 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:38:07.267631 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 21:38:07.268132 ignition[950]: disks: disks passed Jan 13 21:38:07.268135 ignition[950]: POST message to Packet Timeline Jan 13 21:38:07.268146 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 21:38:08.139552 ignition[950]: GET result: OK Jan 13 21:38:08.464487 ignition[950]: Ignition finished successfully Jan 13 21:38:08.466631 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:38:08.483314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:38:08.490491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:38:08.508498 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:38:08.539336 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:38:08.557340 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:38:08.585297 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:38:08.616168 systemd-fsck[968]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:38:08.627567 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:38:08.636182 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:38:08.719061 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 21:38:08.719203 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:38:08.719525 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:38:08.743144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 21:38:08.777047 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (977) Jan 13 21:38:08.777067 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 21:38:08.794275 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:38:08.800169 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:38:08.801133 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:38:08.827240 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:38:08.827251 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:38:08.801884 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 21:38:08.827740 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 13 21:38:08.847325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:38:08.847348 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:38:08.894997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:38:08.914359 coreos-metadata[992]: Jan 13 21:38:08.906 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 21:38:08.914271 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:38:08.956085 coreos-metadata[995]: Jan 13 21:38:08.906 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 21:38:08.938234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 21:38:08.978190 initrd-setup-root[1009]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:38:08.988132 initrd-setup-root[1016]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:38:08.999134 initrd-setup-root[1023]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:38:09.009112 initrd-setup-root[1030]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:38:09.026506 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:38:09.050238 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:38:09.075201 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 21:38:09.050958 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:38:09.075952 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:38:09.100133 ignition[1101]: INFO : Ignition 2.20.0 Jan 13 21:38:09.100133 ignition[1101]: INFO : Stage: mount Jan 13 21:38:09.100133 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:38:09.100133 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 21:38:09.100133 ignition[1101]: INFO : mount: mount passed Jan 13 21:38:09.100133 ignition[1101]: INFO : POST message to Packet Timeline Jan 13 21:38:09.100133 ignition[1101]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 21:38:09.101219 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:38:09.618079 coreos-metadata[995]: Jan 13 21:38:09.617 INFO Fetch successful Jan 13 21:38:09.653542 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 13 21:38:09.653596 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. 
Jan 13 21:38:09.688103 coreos-metadata[992]: Jan 13 21:38:09.678 INFO Fetch successful Jan 13 21:38:09.710517 coreos-metadata[992]: Jan 13 21:38:09.710 INFO wrote hostname ci-4152.2.0-a-ed112912ac to /sysroot/etc/hostname Jan 13 21:38:09.712061 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 21:38:10.059106 ignition[1101]: INFO : GET result: OK Jan 13 21:38:10.437127 ignition[1101]: INFO : Ignition finished successfully Jan 13 21:38:10.440043 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:38:10.475197 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:38:10.478850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:38:10.528009 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1126) Jan 13 21:38:10.545429 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 21:38:10.545445 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:38:10.551322 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:38:10.566101 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:38:10.566116 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:38:10.568057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:38:10.591312 ignition[1143]: INFO : Ignition 2.20.0 Jan 13 21:38:10.591312 ignition[1143]: INFO : Stage: files Jan 13 21:38:10.605225 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:38:10.605225 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 21:38:10.605225 ignition[1143]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:38:10.605225 ignition[1143]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:38:10.605225 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:38:10.605225 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:38:10.595661 unknown[1143]: wrote ssh authorized keys file for user: core Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:38:10.987329 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:38:11.217841 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 21:38:11.378280 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:38:11.378280 ignition[1143]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:38:11.409227 ignition[1143]: INFO : files: files passed Jan 13 21:38:11.409227 ignition[1143]: INFO : POST message to Packet Timeline Jan 13 21:38:11.409227 ignition[1143]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 21:38:12.343628 ignition[1143]: INFO : GET result: OK Jan 13 21:38:12.635189 ignition[1143]: INFO : Ignition finished successfully Jan 13 21:38:12.636679 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:38:12.671257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:38:12.671727 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:38:12.689483 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:38:12.689543 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:38:12.732562 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:38:12.749595 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:38:12.781270 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:38:12.781270 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:38:12.795290 initrd-setup-root-after-ignition[1185]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:38:12.783106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:38:12.863239 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:38:12.863299 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:38:12.881514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:38:12.892302 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:38:12.919476 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:38:12.937414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:38:13.004886 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:38:13.019173 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 13 21:38:13.052556 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:38:13.052790 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:38:13.073738 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:38:13.102640 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:38:13.103069 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:38:13.129760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:38:13.151655 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:38:13.170647 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:38:13.188643 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:38:13.209645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:38:13.230666 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:38:13.250698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:38:13.271690 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:38:13.293726 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:38:13.313641 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:38:13.331539 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:38:13.331942 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:38:13.356758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:38:13.376666 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:38:13.397519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:38:13.397989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 21:38:13.420538 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:38:13.420937 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:38:13.451623 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:38:13.452092 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:38:13.460978 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:38:13.478634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:38:13.479069 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:38:13.506680 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:38:13.524651 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:38:13.542626 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:38:13.542928 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:38:13.562674 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:38:13.562973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:38:13.585721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:38:13.697166 ignition[1205]: INFO : Ignition 2.20.0 Jan 13 21:38:13.697166 ignition[1205]: INFO : Stage: umount Jan 13 21:38:13.697166 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:38:13.697166 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 21:38:13.697166 ignition[1205]: INFO : umount: umount passed Jan 13 21:38:13.697166 ignition[1205]: INFO : POST message to Packet Timeline Jan 13 21:38:13.697166 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 21:38:13.586150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jan 13 21:38:13.605733 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:38:13.606134 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:38:13.623723 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 21:38:13.624138 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 21:38:13.653268 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:38:13.668128 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:38:13.668255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:38:13.684229 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:38:13.706162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:38:13.706388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:38:13.724550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:38:13.724912 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:38:13.769271 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:38:13.773844 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:38:13.774126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:38:13.832127 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:38:13.832175 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:38:14.544587 ignition[1205]: INFO : GET result: OK Jan 13 21:38:14.926803 ignition[1205]: INFO : Ignition finished successfully Jan 13 21:38:14.930597 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:38:14.930880 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:38:14.947807 systemd[1]: Stopped target network.target - Network. 
Jan 13 21:38:14.955480 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:38:14.955630 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:38:14.970546 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:38:14.970684 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:38:14.986558 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:38:14.986694 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:38:15.014437 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:38:15.014603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:38:15.032435 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:38:15.032601 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:38:15.040997 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:38:15.050145 systemd-networkd[917]: enp1s0f1np1: DHCPv6 lease lost Jan 13 21:38:15.056253 systemd-networkd[917]: enp1s0f0np0: DHCPv6 lease lost Jan 13 21:38:15.057777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:38:15.087167 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:38:15.087445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:38:15.107274 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:38:15.107661 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:38:15.117761 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:38:15.117876 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:38:15.147296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:38:15.161286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 13 21:38:15.161319 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:38:15.189378 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:38:15.189462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:38:15.208395 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:38:15.208535 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:38:15.226420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:38:15.226585 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:38:15.247655 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:38:15.269868 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:38:15.270272 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:38:15.275767 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:38:15.275911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:38:15.301438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:38:15.301549 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:38:15.322364 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:38:15.322495 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:38:15.359171 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:38:15.359443 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:38:15.397171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:38:15.397409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:38:15.656186 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Jan 13 21:38:15.447120 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:38:15.456201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:38:15.456231 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:38:15.475235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:38:15.475270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:38:15.504828 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:38:15.504959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:38:15.527797 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:38:15.528037 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:38:15.550499 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:38:15.572564 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:38:15.596988 systemd[1]: Switching root. 
Jan 13 21:38:15.772059 systemd-journald[268]: Journal stopped Jan 13 21:37:55.466396 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05 Jan 13 21:37:55.466410 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 13 21:37:55.466417 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 21:37:55.466422 kernel: BIOS-provided physical RAM map: Jan 13 21:37:55.466426 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 13 21:37:55.466430 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 13 21:37:55.466435 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 13 21:37:55.466439 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 13 21:37:55.466443 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 13 21:37:55.466447 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable Jan 13 21:37:55.466451 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS Jan 13 21:37:55.466456 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved Jan 13 21:37:55.466460 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable Jan 13 21:37:55.466464 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jan 13 21:37:55.466470 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jan 13 21:37:55.466474 kernel: BIOS-e820: [mem 
0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jan 13 21:37:55.466480 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jan 13 21:37:55.466484 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jan 13 21:37:55.466489 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jan 13 21:37:55.466493 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 13 21:37:55.466498 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 13 21:37:55.466502 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 13 21:37:55.466507 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 13 21:37:55.466511 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 13 21:37:55.466516 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jan 13 21:37:55.466521 kernel: NX (Execute Disable) protection: active Jan 13 21:37:55.466525 kernel: APIC: Static calls initialized Jan 13 21:37:55.466530 kernel: SMBIOS 3.2.1 present. 
Jan 13 21:37:55.466535 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Jan 13 21:37:55.466540 kernel: tsc: Detected 3400.000 MHz processor Jan 13 21:37:55.466544 kernel: tsc: Detected 3399.906 MHz TSC Jan 13 21:37:55.466549 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:37:55.466554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:37:55.466559 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jan 13 21:37:55.466564 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 13 21:37:55.466568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:37:55.466573 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jan 13 21:37:55.466579 kernel: Using GB pages for direct mapping Jan 13 21:37:55.466584 kernel: ACPI: Early table checksum verification disabled Jan 13 21:37:55.466589 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 13 21:37:55.466595 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 13 21:37:55.466600 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jan 13 21:37:55.466605 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 13 21:37:55.466610 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jan 13 21:37:55.466616 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jan 13 21:37:55.466621 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jan 13 21:37:55.466626 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 13 21:37:55.466631 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 13 21:37:55.466636 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jan 13 21:37:55.466641 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 13 21:37:55.466646 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 13 21:37:55.466652 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 13 21:37:55.466657 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 21:37:55.466662 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 13 21:37:55.466667 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 13 21:37:55.466672 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 21:37:55.466677 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 21:37:55.466682 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 13 21:37:55.466687 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 13 21:37:55.466692 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 21:37:55.466698 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 13 21:37:55.466703 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 13 21:37:55.466708 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jan 13 21:37:55.466713 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 13 21:37:55.466718 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 13 21:37:55.466723 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 13 21:37:55.466728 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jan 13 
21:37:55.466732 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 13 21:37:55.466738 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 13 21:37:55.466743 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 13 21:37:55.466748 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Jan 13 21:37:55.466753 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 13 21:37:55.466758 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jan 13 21:37:55.466763 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jan 13 21:37:55.466768 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jan 13 21:37:55.466773 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jan 13 21:37:55.466778 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jan 13 21:37:55.466784 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jan 13 21:37:55.466789 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jan 13 21:37:55.466794 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jan 13 21:37:55.466799 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jan 13 21:37:55.466804 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jan 13 21:37:55.466809 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jan 13 21:37:55.466814 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jan 13 21:37:55.466819 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jan 13 21:37:55.466823 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jan 13 21:37:55.466829 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jan 13 
21:37:55.466834 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jan 13 21:37:55.466839 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jan 13 21:37:55.466844 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jan 13 21:37:55.466849 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jan 13 21:37:55.466854 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jan 13 21:37:55.466859 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jan 13 21:37:55.466864 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jan 13 21:37:55.466869 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jan 13 21:37:55.466874 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jan 13 21:37:55.466880 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jan 13 21:37:55.466885 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jan 13 21:37:55.466889 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jan 13 21:37:55.466894 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jan 13 21:37:55.466899 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jan 13 21:37:55.466904 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jan 13 21:37:55.466909 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jan 13 21:37:55.466914 kernel: No NUMA configuration found Jan 13 21:37:55.466919 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jan 13 21:37:55.466925 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jan 13 21:37:55.466930 kernel: Zone ranges: Jan 13 21:37:55.466935 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:37:55.466940 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 21:37:55.466945 kernel: 
Normal [mem 0x0000000100000000-0x000000086effffff] Jan 13 21:37:55.466950 kernel: Movable zone start for each node Jan 13 21:37:55.466955 kernel: Early memory node ranges Jan 13 21:37:55.466960 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 13 21:37:55.466964 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 13 21:37:55.466970 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Jan 13 21:37:55.466975 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Jan 13 21:37:55.466980 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 13 21:37:55.466985 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 13 21:37:55.466994 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 13 21:37:55.467000 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 13 21:37:55.467007 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:37:55.467012 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 13 21:37:55.467037 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 13 21:37:55.467042 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 13 21:37:55.467048 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 13 21:37:55.467053 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 13 21:37:55.467072 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 13 21:37:55.467077 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 13 21:37:55.467083 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 13 21:37:55.467088 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 13 21:37:55.467093 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 13 21:37:55.467099 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 13 21:37:55.467105 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 13 21:37:55.467110 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high 
edge lint[0x1]) Jan 13 21:37:55.467115 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 13 21:37:55.467120 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 13 21:37:55.467126 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 13 21:37:55.467131 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 13 21:37:55.467136 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 13 21:37:55.467141 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 13 21:37:55.467146 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 13 21:37:55.467153 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 13 21:37:55.467158 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 13 21:37:55.467163 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 13 21:37:55.467169 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 13 21:37:55.467174 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 13 21:37:55.467179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 21:37:55.467185 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:37:55.467190 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:37:55.467195 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 21:37:55.467202 kernel: TSC deadline timer available Jan 13 21:37:55.467207 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 13 21:37:55.467212 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jan 13 21:37:55.467218 kernel: Booting paravirtualized kernel on bare hardware Jan 13 21:37:55.467223 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:37:55.467228 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 13 21:37:55.467234 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 13 
21:37:55.467239 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 13 21:37:55.467244 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 13 21:37:55.467251 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 21:37:55.467257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:37:55.467262 kernel: random: crng init done Jan 13 21:37:55.467267 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 13 21:37:55.467273 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 13 21:37:55.467278 kernel: Fallback order for Node 0: 0 Jan 13 21:37:55.467283 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 13 21:37:55.467289 kernel: Policy zone: Normal Jan 13 21:37:55.467295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:37:55.467300 kernel: software IO TLB: area num 16. Jan 13 21:37:55.467306 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 732412K reserved, 0K cma-reserved) Jan 13 21:37:55.467311 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 13 21:37:55.467316 kernel: ftrace: allocating 37920 entries in 149 pages Jan 13 21:37:55.467322 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:37:55.467327 kernel: Dynamic Preempt: voluntary Jan 13 21:37:55.467332 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:37:55.467339 kernel: rcu: RCU event tracing is enabled. 
Jan 13 21:37:55.467345 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 13 21:37:55.467350 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:37:55.467355 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:37:55.467361 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:37:55.467366 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:37:55.467371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 13 21:37:55.467377 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 13 21:37:55.467382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:37:55.467387 kernel: Console: colour VGA+ 80x25 Jan 13 21:37:55.467394 kernel: printk: console [tty0] enabled Jan 13 21:37:55.467399 kernel: printk: console [ttyS1] enabled Jan 13 21:37:55.467404 kernel: ACPI: Core revision 20230628 Jan 13 21:37:55.467410 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jan 13 21:37:55.467415 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:37:55.467420 kernel: DMAR: Host address width 39 Jan 13 21:37:55.467426 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 13 21:37:55.467431 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 13 21:37:55.467436 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 13 21:37:55.467442 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 13 21:37:55.467448 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 13 21:37:55.467453 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
Jan 13 21:37:55.467458 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 13 21:37:55.467464 kernel: x2apic enabled Jan 13 21:37:55.467469 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 13 21:37:55.467474 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 13 21:37:55.467480 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 13 21:37:55.467485 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 13 21:37:55.467491 kernel: process: using mwait in idle threads Jan 13 21:37:55.467497 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 21:37:55.467502 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 21:37:55.467507 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:37:55.467512 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 21:37:55.467517 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 21:37:55.467523 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 13 21:37:55.467528 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:37:55.467533 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 13 21:37:55.467538 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 13 21:37:55.467544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:37:55.467550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:37:55.467555 kernel: TAA: Mitigation: TSX disabled Jan 13 21:37:55.467561 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 13 21:37:55.467566 kernel: SRBDS: Mitigation: Microcode Jan 13 21:37:55.467571 kernel: GDS: Mitigation: Microcode Jan 13 21:37:55.467576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Jan 13 21:37:55.467582 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:37:55.467587 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:37:55.467592 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 13 21:37:55.467598 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 13 21:37:55.467603 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:37:55.467609 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 13 21:37:55.467614 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 13 21:37:55.467620 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 13 21:37:55.467625 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:37:55.467630 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:37:55.467635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:37:55.467641 kernel: landlock: Up and running. Jan 13 21:37:55.467646 kernel: SELinux: Initializing. Jan 13 21:37:55.467651 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:37:55.467656 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:37:55.467662 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 21:37:55.467668 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 21:37:55.467674 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 21:37:55.467679 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 21:37:55.467684 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 13 21:37:55.467690 kernel: ... 
version: 4 Jan 13 21:37:55.467695 kernel: ... bit width: 48 Jan 13 21:37:55.467700 kernel: ... generic registers: 4 Jan 13 21:37:55.467706 kernel: ... value mask: 0000ffffffffffff Jan 13 21:37:55.467711 kernel: ... max period: 00007fffffffffff Jan 13 21:37:55.467717 kernel: ... fixed-purpose events: 3 Jan 13 21:37:55.467722 kernel: ... event mask: 000000070000000f Jan 13 21:37:55.467728 kernel: signal: max sigframe size: 2032 Jan 13 21:37:55.467733 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 13 21:37:55.467738 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:37:55.467744 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:37:55.467749 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 13 21:37:55.467754 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:37:55.467760 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:37:55.467766 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 13 21:37:55.467772 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 13 21:37:55.467777 kernel: smp: Brought up 1 node, 16 CPUs Jan 13 21:37:55.467782 kernel: smpboot: Max logical packages: 1 Jan 13 21:37:55.467788 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 13 21:37:55.467793 kernel: devtmpfs: initialized Jan 13 21:37:55.467798 kernel: x86/mm: Memory block size: 128MB Jan 13 21:37:55.467804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Jan 13 21:37:55.467809 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 13 21:37:55.467815 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:37:55.467821 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 13 21:37:55.467826 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:37:55.467831 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:37:55.467836 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:37:55.467842 kernel: audit: type=2000 audit(1736804270.042:1): state=initialized audit_enabled=0 res=1 Jan 13 21:37:55.467847 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:37:55.467852 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:37:55.467858 kernel: cpuidle: using governor menu Jan 13 21:37:55.467864 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:37:55.467869 kernel: dca service started, version 1.12.1 Jan 13 21:37:55.467874 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 13 21:37:55.467880 kernel: PCI: Using configuration type 1 for base access Jan 13 21:37:55.467885 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 13 21:37:55.467890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 21:37:55.467896 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:37:55.467901 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:37:55.467906 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:37:55.467912 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:37:55.467918 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:37:55.467923 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:37:55.467928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:37:55.467934 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:37:55.467939 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 13 21:37:55.467944 kernel: ACPI: Dynamic OEM Table Load: Jan 13 21:37:55.467950 kernel: ACPI: SSDT 0xFFFF8B6381607000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 13 21:37:55.467955 kernel: ACPI: Dynamic OEM Table Load: Jan 13 21:37:55.467961 kernel: ACPI: SSDT 0xFFFF8B63815F9800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 13 21:37:55.467967 kernel: ACPI: Dynamic OEM Table Load: Jan 13 21:37:55.467972 kernel: ACPI: SSDT 0xFFFF8B63815E4C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 13 21:37:55.467977 kernel: ACPI: Dynamic OEM Table Load: Jan 13 21:37:55.467982 kernel: ACPI: SSDT 0xFFFF8B63815FC800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 13 21:37:55.467987 kernel: ACPI: Dynamic OEM Table Load: Jan 13 21:37:55.467993 kernel: ACPI: SSDT 0xFFFF8B6381608000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 13 21:37:55.467998 kernel: ACPI: Dynamic OEM Table Load: Jan 13 21:37:55.468005 kernel: ACPI: SSDT 0xFFFF8B6381E9A400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 13 21:37:55.468011 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 13 21:37:55.468016 kernel: ACPI: Interpreter enabled Jan 13 21:37:55.468042 kernel: ACPI: PM: (supports S0 S5) Jan 13 21:37:55.468047 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 13 21:37:55.468068 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 13 21:37:55.468074 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 13 21:37:55.468079 kernel: HEST: Table parsing has been initialized. Jan 13 21:37:55.468084 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 13 21:37:55.468089 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:37:55.468096 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:37:55.468101 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 13 21:37:55.468106 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 13 21:37:55.468112 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 13 21:37:55.468117 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 13 21:37:55.468123 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 13 21:37:55.468128 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 13 21:37:55.468133 kernel: ACPI: \_TZ_.FN00: New power resource Jan 13 21:37:55.468138 kernel: ACPI: \_TZ_.FN01: New power resource Jan 13 21:37:55.468145 kernel: ACPI: \_TZ_.FN02: New power resource Jan 13 21:37:55.468150 kernel: ACPI: \_TZ_.FN03: New power resource Jan 13 21:37:55.468156 kernel: ACPI: \_TZ_.FN04: New power resource Jan 13 21:37:55.468161 kernel: ACPI: \PIN_: New power resource Jan 13 21:37:55.468166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 13 21:37:55.468238 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:37:55.468290 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 13 21:37:55.468337 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 13 21:37:55.468346 kernel: PCI host bridge to bus 0000:00 Jan 13 21:37:55.468395 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 13 21:37:55.468438 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:37:55.468478 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:37:55.468520 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 13 21:37:55.468560 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 13 21:37:55.468600 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 13 21:37:55.468657 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 13 21:37:55.468713 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 13 21:37:55.468761 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.468811 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 13 21:37:55.468858 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 13 21:37:55.468907 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 13 21:37:55.468956 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 13 21:37:55.469009 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 13 21:37:55.469087 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 13 21:37:55.469133 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 13 21:37:55.469182 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 13 21:37:55.469228 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 13 21:37:55.469276 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 13 21:37:55.469325 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 13 21:37:55.469372 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 21:37:55.469423 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 13 21:37:55.469469 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 
21:37:55.469518 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 13 21:37:55.469568 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 13 21:37:55.469617 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 13 21:37:55.469674 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 13 21:37:55.469725 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 13 21:37:55.469770 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 13 21:37:55.469821 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 13 21:37:55.469867 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 13 21:37:55.469917 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 13 21:37:55.469966 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 13 21:37:55.470034 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 13 21:37:55.470099 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 13 21:37:55.470145 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 13 21:37:55.470194 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 13 21:37:55.470240 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 13 21:37:55.470288 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 13 21:37:55.470335 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 13 21:37:55.470386 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 13 21:37:55.470434 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.470491 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 13 21:37:55.470537 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.470588 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 13 21:37:55.470634 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.470685 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 
13 21:37:55.470731 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.470784 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 13 21:37:55.470830 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.470881 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 13 21:37:55.470927 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 21:37:55.470978 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 13 21:37:55.471052 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 13 21:37:55.471117 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 13 21:37:55.471163 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 13 21:37:55.471216 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 13 21:37:55.471263 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 13 21:37:55.471318 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 13 21:37:55.471367 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 13 21:37:55.471417 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 13 21:37:55.471465 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 13 21:37:55.471513 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 21:37:55.471561 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 21:37:55.471613 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 13 21:37:55.471661 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 13 21:37:55.471709 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 13 21:37:55.471759 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 13 21:37:55.471807 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 21:37:55.471855 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 21:37:55.471903 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 21:37:55.471949 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 21:37:55.471996 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 21:37:55.472080 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 21:37:55.472133 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 13 21:37:55.472185 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 13 21:37:55.472233 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 13 21:37:55.472280 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 13 21:37:55.472328 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 13 21:37:55.472376 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.472423 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 21:37:55.472470 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 21:37:55.472519 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 21:37:55.472572 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 13 21:37:55.472620 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 13 21:37:55.472668 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 13 21:37:55.472718 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 21:37:55.472764 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 13 21:37:55.472812 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 13 21:37:55.472860 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 21:37:55.472907 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 21:37:55.472954 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 21:37:55.473002 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Jan 13 21:37:55.473108 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 21:37:55.473156 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 13 21:37:55.473204 kernel: pci 0000:06:00.0: supports D1 D2 Jan 13 21:37:55.473251 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:37:55.473301 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 21:37:55.473347 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.473394 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.473447 kernel: pci_bus 0000:07: extended config space not accessible Jan 13 21:37:55.473502 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 21:37:55.473551 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 13 21:37:55.473604 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 13 21:37:55.473675 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 21:37:55.473738 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:37:55.473788 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 21:37:55.473836 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:37:55.473901 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 21:37:55.473963 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.474029 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.474054 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 21:37:55.474060 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 21:37:55.474065 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 21:37:55.474071 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 21:37:55.474077 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 21:37:55.474082 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Jan 13 21:37:55.474088 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 21:37:55.474094 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 21:37:55.474099 kernel: iommu: Default domain type: Translated Jan 13 21:37:55.474106 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:37:55.474112 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:37:55.474117 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:37:55.474123 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 21:37:55.474129 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Jan 13 21:37:55.474134 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 13 21:37:55.474140 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 13 21:37:55.474145 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 13 21:37:55.474151 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 13 21:37:55.474203 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 13 21:37:55.474252 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 13 21:37:55.474302 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:37:55.474310 kernel: vgaarb: loaded Jan 13 21:37:55.474316 kernel: clocksource: Switched to clocksource tsc-early Jan 13 21:37:55.474322 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:37:55.474328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:37:55.474333 kernel: pnp: PnP ACPI init Jan 13 21:37:55.474383 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 21:37:55.474432 kernel: pnp 00:02: [dma 0 disabled] Jan 13 21:37:55.474479 kernel: pnp 00:03: [dma 0 disabled] Jan 13 21:37:55.474527 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 21:37:55.474570 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 21:37:55.474615 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jan 13 21:37:55.474661 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 21:37:55.474706 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 21:37:55.474749 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 21:37:55.474791 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 13 21:37:55.474837 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 21:37:55.474879 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 21:37:55.474922 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 21:37:55.474965 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 21:37:55.475078 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 21:37:55.475121 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 21:37:55.475163 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 21:37:55.475204 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 21:37:55.475247 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 13 21:37:55.475290 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 21:37:55.475332 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 21:37:55.475381 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 21:37:55.475390 kernel: pnp: PnP ACPI: found 10 devices Jan 13 21:37:55.475396 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:37:55.475401 kernel: NET: Registered PF_INET protocol family Jan 13 21:37:55.475407 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:37:55.475413 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 21:37:55.475419 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:37:55.475424 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:37:55.475432 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 21:37:55.475437 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 21:37:55.475443 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 21:37:55.475449 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 21:37:55.475455 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:37:55.475461 kernel: NET: Registered PF_XDP protocol family Jan 13 21:37:55.475509 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 13 21:37:55.475558 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 13 21:37:55.475607 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 13 21:37:55.475658 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475706 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475755 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475801 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 21:37:55.475849 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 21:37:55.475895 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 21:37:55.475941 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 21:37:55.475991 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 21:37:55.476076 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 21:37:55.476123 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 21:37:55.476169 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 
13 21:37:55.476216 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 21:37:55.476265 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 21:37:55.476312 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 21:37:55.476374 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 21:37:55.476436 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 21:37:55.476483 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.476530 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.476576 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 21:37:55.476623 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 21:37:55.476670 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 21:37:55.476715 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 21:37:55.476757 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:37:55.476798 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:37:55.476840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:37:55.476880 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 13 21:37:55.476921 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 21:37:55.476968 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 13 21:37:55.477016 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 21:37:55.477098 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 13 21:37:55.477141 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 13 21:37:55.477187 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 21:37:55.477230 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 13 21:37:55.477276 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 13 21:37:55.477322 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 13 21:37:55.477367 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 21:37:55.477412 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 13 21:37:55.477420 kernel: PCI: CLS 64 bytes, default 64 Jan 13 21:37:55.477426 kernel: DMAR: No ATSR found Jan 13 21:37:55.477432 kernel: DMAR: No SATC found Jan 13 21:37:55.477438 kernel: DMAR: dmar0: Using Queued invalidation Jan 13 21:37:55.477483 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 13 21:37:55.477532 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 13 21:37:55.477579 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 13 21:37:55.477625 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 13 21:37:55.477672 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 13 21:37:55.477719 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 13 21:37:55.477795 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 13 21:37:55.477840 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 13 21:37:55.477887 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 13 21:37:55.477933 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 13 21:37:55.477983 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 13 21:37:55.478061 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 13 21:37:55.478108 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 13 21:37:55.478153 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 13 21:37:55.478200 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 13 21:37:55.478246 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 13 21:37:55.478292 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 13 21:37:55.478337 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 13 21:37:55.478386 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 13 21:37:55.478435 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 13 21:37:55.478480 kernel: pci 0000:00:1f.5: Adding to iommu group 
14 Jan 13 21:37:55.478528 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 13 21:37:55.478576 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 13 21:37:55.478624 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 13 21:37:55.478671 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 13 21:37:55.478719 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 13 21:37:55.478770 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 13 21:37:55.478778 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 21:37:55.478784 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:37:55.478790 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 13 21:37:55.478796 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 13 21:37:55.478801 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 21:37:55.478807 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 21:37:55.478813 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 21:37:55.478863 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 21:37:55.478873 kernel: Initialise system trusted keyrings Jan 13 21:37:55.478878 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 21:37:55.478884 kernel: Key type asymmetric registered Jan 13 21:37:55.478890 kernel: Asymmetric key parser 'x509' registered Jan 13 21:37:55.478895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:37:55.478901 kernel: io scheduler mq-deadline registered Jan 13 21:37:55.478907 kernel: io scheduler kyber registered Jan 13 21:37:55.478912 kernel: io scheduler bfq registered Jan 13 21:37:55.478960 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 13 21:37:55.479009 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 13 21:37:55.479094 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 
Jan 13 21:37:55.479158 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Jan 13 21:37:55.479218 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Jan 13 21:37:55.479264 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Jan 13 21:37:55.479316 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jan 13 21:37:55.479326 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jan 13 21:37:55.479332 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Jan 13 21:37:55.479339 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:37:55.479344 kernel: pstore: Registered erst as persistent store backend
Jan 13 21:37:55.479350 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:37:55.479356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:37:55.479362 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:37:55.479367 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 21:37:55.479373 kernel: hpet_acpi_add: no address or irqs in _CRS
Jan 13 21:37:55.479424 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jan 13 21:37:55.479433 kernel: i8042: PNP: No PS/2 controller found.
Jan 13 21:37:55.479476 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jan 13 21:37:55.479519 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jan 13 21:37:55.479562 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T21:37:54 UTC (1736804274)
Jan 13 21:37:55.479605 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jan 13 21:37:55.479613 kernel: intel_pstate: Intel P-state driver initializing
Jan 13 21:37:55.479619 kernel: intel_pstate: Disabling energy efficiency optimization
Jan 13 21:37:55.479626 kernel: intel_pstate: HWP enabled
Jan 13 21:37:55.479632 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:37:55.479638 kernel: Segment Routing with IPv6
Jan 13 21:37:55.479643 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:37:55.479649 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:37:55.479655 kernel: Key type dns_resolver registered
Jan 13 21:37:55.479660 kernel: microcode: Microcode Update Driver: v2.2.
Jan 13 21:37:55.479666 kernel: IPI shorthand broadcast: enabled
Jan 13 21:37:55.479672 kernel: sched_clock: Marking stable (2491000587, 1449300293)->(4499074336, -558773456)
Jan 13 21:37:55.479678 kernel: registered taskstats version 1
Jan 13 21:37:55.479684 kernel: Loading compiled-in X.509 certificates
Jan 13 21:37:55.479690 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 21:37:55.479695 kernel: Key type .fscrypt registered
Jan 13 21:37:55.479701 kernel: Key type fscrypt-provisioning registered
Jan 13 21:37:55.479707 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:37:55.479712 kernel: ima: No architecture policies found
Jan 13 21:37:55.479718 kernel: clk: Disabling unused clocks
Jan 13 21:37:55.479725 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 21:37:55.479730 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:37:55.479736 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 21:37:55.479742 kernel: Run /init as init process
Jan 13 21:37:55.479747 kernel: with arguments:
Jan 13 21:37:55.479753 kernel: /init
Jan 13 21:37:55.479759 kernel: with environment:
Jan 13 21:37:55.479764 kernel: HOME=/
Jan 13 21:37:55.479770 kernel: TERM=linux
Jan 13 21:37:55.479775 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:37:55.479783 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:37:55.479790 systemd[1]: Detected architecture x86-64.
Jan 13 21:37:55.479796 systemd[1]: Running in initrd.
Jan 13 21:37:55.479802 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:37:55.479808 systemd[1]: Hostname set to .
Jan 13 21:37:55.479814 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:37:55.479821 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:37:55.479827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:37:55.479833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:37:55.479839 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:37:55.479845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:37:55.479851 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:37:55.479857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:37:55.479863 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:37:55.479871 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:37:55.479877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:37:55.479883 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:37:55.479888 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:37:55.479895 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:37:55.479900 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:37:55.479906 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:37:55.479912 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:37:55.479919 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:37:55.479925 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:37:55.479931 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:37:55.479937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:37:55.479943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:37:55.479949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:37:55.479955 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:37:55.479961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:37:55.479968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:37:55.479974 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Jan 13 21:37:55.479979 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Jan 13 21:37:55.479985 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:37:55.479991 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:37:55.479997 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:37:55.480053 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:37:55.480084 systemd-journald[268]: Collecting audit messages is disabled.
Jan 13 21:37:55.480099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:37:55.480106 systemd-journald[268]: Journal started
Jan 13 21:37:55.480119 systemd-journald[268]: Runtime Journal (/run/log/journal/bc6b779d1cb14e8ab89ab75d988b6920) is 8.0M, max 639.9M, 631.9M free.
Jan 13 21:37:55.483597 systemd-modules-load[270]: Inserted module 'overlay'
Jan 13 21:37:55.500962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:37:55.501009 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:37:55.530034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:37:55.536685 systemd-modules-load[270]: Inserted module 'br_netfilter'
Jan 13 21:37:55.542332 kernel: Bridge firewalling registered
Jan 13 21:37:55.542400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:37:55.542600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:37:55.542857 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:37:55.542969 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:37:55.544093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:37:55.544458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:37:55.544858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:37:55.580289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:37:55.671219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:37:55.699644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:37:55.721559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:37:55.757411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:37:55.760078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:37:55.780880 systemd-resolved[293]: Positive Trust Anchors:
Jan 13 21:37:55.780886 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:37:55.780911 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:37:55.782486 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jan 13 21:37:55.797626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:37:55.797724 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:37:55.797808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:37:55.802788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:37:55.807343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:37:55.807973 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:37:55.880511 dracut-cmdline[307]: dracut-dracut-053
Jan 13 21:37:55.888234 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:37:56.057036 kernel: SCSI subsystem initialized
Jan 13 21:37:56.068007 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:37:56.081053 kernel: iscsi: registered transport (tcp)
Jan 13 21:37:56.102384 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:37:56.102401 kernel: QLogic iSCSI HBA Driver
Jan 13 21:37:56.125348 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:37:56.138125 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:37:56.221017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:37:56.221070 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:37:56.226048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:37:56.272077 kernel: raid6: avx2x4 gen() 52097 MB/s
Jan 13 21:37:56.293079 kernel: raid6: avx2x2 gen() 52716 MB/s
Jan 13 21:37:56.319147 kernel: raid6: avx2x1 gen() 45229 MB/s
Jan 13 21:37:56.319166 kernel: raid6: using algorithm avx2x2 gen() 52716 MB/s
Jan 13 21:37:56.346240 kernel: raid6: .... xor() 30926 MB/s, rmw enabled
Jan 13 21:37:56.346258 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:37:56.367051 kernel: xor: automatically using best checksumming function avx
Jan 13 21:37:56.470058 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:37:56.475338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:37:56.485355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:37:56.502178 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Jan 13 21:37:56.514670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:37:56.546281 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:37:56.585063 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Jan 13 21:37:56.632192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:37:56.652400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:37:56.741494 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:37:56.782709 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:37:56.782724 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 13 21:37:56.782732 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 13 21:37:56.782743 kernel: PTP clock support registered
Jan 13 21:37:56.782751 kernel: ACPI: bus type USB registered
Jan 13 21:37:56.773147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:37:56.868129 kernel: usbcore: registered new interface driver usbfs
Jan 13 21:37:56.868147 kernel: usbcore: registered new interface driver hub
Jan 13 21:37:56.868157 kernel: usbcore: registered new device driver usb
Jan 13 21:37:56.868166 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:37:56.868175 kernel: libata version 3.00 loaded.
Jan 13 21:37:56.868189 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:37:56.868198 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 13 21:37:56.967852 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jan 13 21:37:56.967971 kernel: ahci 0000:00:17.0: version 3.0
Jan 13 21:37:56.968084 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jan 13 21:37:56.968186 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Jan 13 21:37:56.968286 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jan 13 21:37:56.968385 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 13 21:37:56.968487 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jan 13 21:37:56.968586 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jan 13 21:37:56.968684 kernel: hub 1-0:1.0: USB hub found
Jan 13 21:37:56.968791 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jan 13 21:37:56.968807 kernel: scsi host0: ahci
Jan 13 21:37:56.968902 kernel: scsi host1: ahci
Jan 13 21:37:56.968996 kernel: scsi host2: ahci
Jan 13 21:37:56.969096 kernel: scsi host3: ahci
Jan 13 21:37:56.969187 kernel: scsi host4: ahci
Jan 13 21:37:56.969277 kernel: scsi host5: ahci
Jan 13 21:37:56.969367 kernel: scsi host6: ahci
Jan 13 21:37:56.969459 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Jan 13 21:37:56.969475 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Jan 13 21:37:56.969492 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Jan 13 21:37:56.969506 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Jan 13 21:37:56.969520 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Jan 13 21:37:56.969534 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Jan 13 21:37:56.969548 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Jan 13 21:37:56.969561 kernel: hub 1-0:1.0: 16 ports detected
Jan 13 21:37:56.969657 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jan 13 21:37:56.969672 kernel: pps pps0: new PPS source ptp0
Jan 13 21:37:56.969768 kernel: hub 2-0:1.0: USB hub found
Jan 13 21:37:56.969872 kernel: igb 0000:03:00.0: added PHC on eth0
Jan 13 21:37:57.019784 kernel: hub 2-0:1.0: 10 ports detected
Jan 13 21:37:57.020140 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 13 21:37:57.020283 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e6:30
Jan 13 21:37:57.020424 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Jan 13 21:37:57.020692 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 13 21:37:56.805725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:37:56.805793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:37:57.109253 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Jan 13 21:37:57.517406 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 13 21:37:57.517601 kernel: pps pps1: new PPS source ptp1
Jan 13 21:37:57.517774 kernel: igb 0000:04:00.0: added PHC on eth1
Jan 13 21:37:57.517941 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 13 21:37:57.518122 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e6:31
Jan 13 21:37:57.518281 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Jan 13 21:37:57.518514 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 13 21:37:57.518917 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jan 13 21:37:57.519155 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:37:57.519173 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 13 21:37:57.519186 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:37:57.519200 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:37:57.519213 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 13 21:37:57.519226 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 13 21:37:57.519239 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jan 13 21:37:57.519253 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jan 13 21:37:57.519267 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jan 13 21:37:57.519284 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 13 21:37:57.519298 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 13 21:37:57.519310 kernel: ata1.00: Features: NCQ-prio
Jan 13 21:37:57.519323 kernel: ata2.00: Features: NCQ-prio
Jan 13 21:37:57.519337 kernel: ata1.00: configured for UDMA/133
Jan 13 21:37:57.519350 kernel: ata2.00: configured for UDMA/133
Jan 13 21:37:57.519363 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jan 13 21:37:57.520650 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 13 21:37:57.520724 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jan 13 21:37:57.520792 kernel: hub 1-14:1.0: USB hub found
Jan 13 21:37:57.520866 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Jan 13 21:37:57.520929 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Jan 13 21:37:57.520991 kernel: hub 1-14:1.0: 4 ports detected
Jan 13 21:37:57.521070 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 21:37:57.521079 kernel: ata1.00: Enabling discard_zeroes_data
Jan 13 21:37:57.521089 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 13 21:37:57.521150 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 13 21:37:57.521210 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Jan 13 21:37:57.521271 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Jan 13 21:37:57.521330 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jan 13 21:37:57.521388 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 13 21:37:57.521445 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jan 13 21:37:57.521504 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 21:37:57.521562 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 13 21:37:57.521619 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Jan 13 21:37:57.521676 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jan 13 21:37:57.521732 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 21:37:57.521790 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 21:37:57.521798 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jan 13 21:37:57.521855 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jan 13 21:37:57.521914 kernel: ata1.00: Enabling discard_zeroes_data
Jan 13 21:37:57.521923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:37:57.521930 kernel: GPT:9289727 != 937703087
Jan 13 21:37:57.521938 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:37:57.521945 kernel: GPT:9289727 != 937703087
Jan 13 21:37:57.521952 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:37:57.521959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:37:57.521966 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 13 21:37:57.522037 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 13 21:37:57.522097 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Jan 13 21:37:58.045326 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 13 21:37:58.045989 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (568)
Jan 13 21:37:58.046090 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (546)
Jan 13 21:37:58.046164 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jan 13 21:37:58.046931 kernel: ata1.00: Enabling discard_zeroes_data
Jan 13 21:37:58.046980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:37:58.047058 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:37:58.047103 kernel: usbcore: registered new interface driver usbhid
Jan 13 21:37:58.047141 kernel: usbhid: USB HID core driver
Jan 13 21:37:58.047178 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jan 13 21:37:58.047216 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 13 21:37:58.047607 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Jan 13 21:37:58.047999 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jan 13 21:37:58.048495 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jan 13 21:37:58.048591 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jan 13 21:37:58.049052 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 13 21:37:56.925948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:37:58.074438 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Jan 13 21:37:58.074524 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Jan 13 21:37:57.109113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:37:57.109214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:37:57.130106 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:37:57.146187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:37:57.156284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:37:58.124180 disk-uuid[704]: Primary Header is updated.
Jan 13 21:37:58.124180 disk-uuid[704]: Secondary Entries is updated.
Jan 13 21:37:58.124180 disk-uuid[704]: Secondary Header is updated.
Jan 13 21:37:57.156746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:37:57.156769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:37:57.156793 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:37:57.157168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:37:57.233319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:37:57.295106 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:37:57.360186 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:37:57.372597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:37:57.561243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Jan 13 21:37:57.588723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Jan 13 21:37:57.606646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 13 21:37:57.620175 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 13 21:37:57.631080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 13 21:37:57.656115 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:37:58.681192 kernel: ata1.00: Enabling discard_zeroes_data
Jan 13 21:37:58.690063 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:37:58.690081 disk-uuid[705]: The operation has completed successfully.
Jan 13 21:37:58.729847 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:37:58.729895 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:37:58.766239 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:37:58.791123 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:37:58.791189 sh[734]: Success
Jan 13 21:37:58.828201 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:37:58.850132 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:37:58.851386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:37:58.907769 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 21:37:58.907789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:37:58.917404 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:37:58.924428 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:37:58.930283 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:37:58.944009 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:37:58.945538 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:37:58.955443 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:37:58.994781 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:37:58.994793 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:37:58.966149 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:37:59.040056 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:37:59.040070 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:37:59.040079 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:37:59.040087 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:37:59.040139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:37:59.040629 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:37:59.089352 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:37:59.110869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:37:59.124183 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:37:59.143164 systemd-networkd[917]: lo: Link UP
Jan 13 21:37:59.143167 systemd-networkd[917]: lo: Gained carrier
Jan 13 21:37:59.145487 systemd-networkd[917]: Enumeration completed
Jan 13 21:37:59.156766 ignition[859]: Ignition 2.20.0
Jan 13 21:37:59.146151 systemd-networkd[917]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:37:59.156770 ignition[859]: Stage: fetch-offline
Jan 13 21:37:59.148188 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:37:59.156788 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:37:59.155335 systemd[1]: Reached target network.target - Network.
Jan 13 21:37:59.156793 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 21:37:59.158991 unknown[859]: fetched base config from "system"
Jan 13 21:37:59.156844 ignition[859]: parsed url from cmdline: ""
Jan 13 21:37:59.158995 unknown[859]: fetched user config from "system"
Jan 13 21:37:59.156846 ignition[859]: no config URL provided
Jan 13 21:37:59.173048 systemd-networkd[917]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:37:59.156849 ignition[859]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:37:59.184375 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:37:59.156872 ignition[859]: parsing config with SHA512: 44a1a4f6dc73ce748b42d291a2101ee2893ed6974b5349b4880c4b1e5fbc55e442c6fe6ca00aaca416f5b9faa845d3175b5909e7a2fff0907e83caf613a04f1a
Jan 13 21:37:59.201103 systemd-networkd[917]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:37:59.159291 ignition[859]: fetch-offline: fetch-offline passed
Jan 13 21:37:59.212504 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:37:59.159294 ignition[859]: POST message to Packet Timeline
Jan 13 21:37:59.226113 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:37:59.380248 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 13 21:37:59.159296 ignition[859]: POST Status error: resource requires networking
Jan 13 21:37:59.377274 systemd-networkd[917]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:37:59.159336 ignition[859]: Ignition finished successfully
Jan 13 21:37:59.232451 ignition[930]: Ignition 2.20.0
Jan 13 21:37:59.232455 ignition[930]: Stage: kargs
Jan 13 21:37:59.232550 ignition[930]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:37:59.232555 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 21:37:59.233043 ignition[930]: kargs: kargs passed
Jan 13 21:37:59.233046 ignition[930]: POST message to Packet Timeline
Jan 13 21:37:59.233057 ignition[930]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 21:37:59.233489 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38865->[::1]:53: read: connection refused
Jan 13 21:37:59.434353 ignition[930]: GET https://metadata.packet.net/metadata: attempt #2
Jan 13 21:37:59.434615 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58892->[::1]:53: read: connection refused
Jan 13 21:37:59.574122 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 13 21:37:59.574686 systemd-networkd[917]: eno1: Link UP
Jan 13 21:37:59.574874 systemd-networkd[917]: eno2: Link UP
Jan 13 21:37:59.575015 systemd-networkd[917]: enp1s0f0np0: Link UP
Jan 13 21:37:59.575176 systemd-networkd[917]: enp1s0f0np0: Gained carrier
Jan 13 21:37:59.584242 systemd-networkd[917]: enp1s0f1np1: Link UP
Jan 13 21:37:59.628250 systemd-networkd[917]: enp1s0f0np0: DHCPv4 address 86.109.11.45/31, gateway 86.109.11.44 acquired from 145.40.83.140
Jan 13 21:37:59.835082 ignition[930]: GET https://metadata.packet.net/metadata: attempt #3
Jan 13 21:37:59.836085 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56419->[::1]:53: read: connection refused
Jan 13 21:38:00.417823 systemd-networkd[917]: enp1s0f1np1: Gained carrier
Jan 13 21:38:00.609610 systemd-networkd[917]: enp1s0f0np0: Gained IPv6LL
Jan 13 21:38:00.636585 ignition[930]: GET https://metadata.packet.net/metadata: attempt #4
Jan 13 21:38:00.637683 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44008->[::1]:53: read: connection refused
Jan 13 21:38:01.633630 systemd-networkd[917]: enp1s0f1np1: Gained IPv6LL
Jan 13 21:38:02.239133 ignition[930]: GET https://metadata.packet.net/metadata: attempt #5
Jan 13 21:38:02.240351 ignition[930]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48521->[::1]:53: read: connection refused
Jan 13 21:38:05.442730 ignition[930]: GET https://metadata.packet.net/metadata: attempt #6
Jan 13 21:38:06.358766 ignition[930]: GET result: OK
Jan 13 21:38:07.228887 ignition[930]: Ignition finished successfully
Jan 13 21:38:07.233962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:38:07.261247 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:38:07.267518 ignition[950]: Ignition 2.20.0
Jan 13 21:38:07.267522 ignition[950]: Stage: disks
Jan 13 21:38:07.267625 ignition[950]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:38:07.267631 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 21:38:07.268132 ignition[950]: disks: disks passed
Jan 13 21:38:07.268135 ignition[950]: POST message to Packet Timeline
Jan 13 21:38:07.268146 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 21:38:08.139552 ignition[950]: GET result: OK
Jan 13 21:38:08.464487 ignition[950]: Ignition finished successfully
Jan 13 21:38:08.466631 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:38:08.483314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:38:08.490491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:38:08.508498 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:38:08.539336 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:38:08.557340 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:38:08.585297 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:38:08.616168 systemd-fsck[968]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:38:08.627567 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:38:08.636182 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:38:08.719061 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 21:38:08.719203 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:38:08.719525 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:38:08.743144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:38:08.777047 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (977)
Jan 13 21:38:08.777067 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:38:08.794275 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:38:08.800169 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:38:08.801133 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:38:08.827240 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:38:08.827251 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:38:08.801884 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 21:38:08.827740 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 13 21:38:08.847325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:38:08.847348 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:38:08.894997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:38:08.914359 coreos-metadata[992]: Jan 13 21:38:08.906 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 21:38:08.914271 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:38:08.956085 coreos-metadata[995]: Jan 13 21:38:08.906 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 21:38:08.938234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:38:08.978190 initrd-setup-root[1009]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:38:08.988132 initrd-setup-root[1016]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:38:08.999134 initrd-setup-root[1023]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:38:09.009112 initrd-setup-root[1030]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:38:09.026506 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:38:09.050238 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:38:09.075201 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:38:09.050958 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:38:09.075952 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:38:09.100133 ignition[1101]: INFO : Ignition 2.20.0
Jan 13 21:38:09.100133 ignition[1101]: INFO : Stage: mount
Jan 13 21:38:09.100133 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:38:09.100133 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 21:38:09.100133 ignition[1101]: INFO : mount: mount passed
Jan 13 21:38:09.100133 ignition[1101]: INFO : POST message to Packet Timeline
Jan 13 21:38:09.100133 ignition[1101]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 21:38:09.101219 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:38:09.618079 coreos-metadata[995]: Jan 13 21:38:09.617 INFO Fetch successful
Jan 13 21:38:09.653542 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 13 21:38:09.653596 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 13 21:38:09.688103 coreos-metadata[992]: Jan 13 21:38:09.678 INFO Fetch successful
Jan 13 21:38:09.710517 coreos-metadata[992]: Jan 13 21:38:09.710 INFO wrote hostname ci-4152.2.0-a-ed112912ac to /sysroot/etc/hostname
Jan 13 21:38:09.712061 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 21:38:10.059106 ignition[1101]: INFO : GET result: OK
Jan 13 21:38:10.437127 ignition[1101]: INFO : Ignition finished successfully
Jan 13 21:38:10.440043 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:38:10.475197 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:38:10.478850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:38:10.528009 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1126)
Jan 13 21:38:10.545429 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:38:10.545445 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:38:10.551322 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:38:10.566101 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:38:10.566116 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:38:10.568057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:38:10.591312 ignition[1143]: INFO : Ignition 2.20.0
Jan 13 21:38:10.591312 ignition[1143]: INFO : Stage: files
Jan 13 21:38:10.605225 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:38:10.605225 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 21:38:10.605225 ignition[1143]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:38:10.605225 ignition[1143]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:38:10.605225 ignition[1143]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:38:10.605225 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:38:10.605225 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:38:10.595661 unknown[1143]: wrote ssh authorized keys file for user: core
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:38:10.737325 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:38:10.987329 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 21:38:11.217841 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:38:11.378280 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:38:11.378280 ignition[1143]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:38:11.409227 ignition[1143]: INFO : files: files passed
Jan 13 21:38:11.409227 ignition[1143]: INFO : POST message to Packet Timeline
Jan 13 21:38:11.409227 ignition[1143]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 21:38:12.343628 ignition[1143]: INFO : GET result: OK
Jan 13 21:38:12.635189 ignition[1143]: INFO : Ignition finished successfully
Jan 13 21:38:12.636679 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:38:12.671257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:38:12.671727 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:38:12.689483 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:38:12.689543 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:38:12.732562 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:38:12.749595 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:38:12.781270 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:38:12.781270 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:38:12.795290 initrd-setup-root-after-ignition[1185]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:38:12.783106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:38:12.863239 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:38:12.863299 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:38:12.881514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:38:12.892302 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:38:12.919476 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:38:12.937414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:38:13.004886 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:38:13.019173 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:38:13.052556 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:38:13.052790 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:38:13.073738 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:38:13.102640 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:38:13.103069 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:38:13.129760 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:38:13.151655 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:38:13.170647 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:38:13.188643 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:38:13.209645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:38:13.230666 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:38:13.250698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:38:13.271690 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:38:13.293726 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:38:13.313641 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:38:13.331539 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:38:13.331942 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:38:13.356758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:38:13.376666 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:38:13.397519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:38:13.397989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:38:13.420538 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:38:13.420937 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:38:13.451623 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:38:13.452092 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:38:13.460978 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:38:13.478634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:38:13.479069 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:38:13.506680 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:38:13.524651 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:38:13.542626 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:38:13.542928 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:38:13.562674 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:38:13.562973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:38:13.585721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:38:13.697166 ignition[1205]: INFO : Ignition 2.20.0
Jan 13 21:38:13.697166 ignition[1205]: INFO : Stage: umount
Jan 13 21:38:13.697166 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:38:13.697166 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 21:38:13.697166 ignition[1205]: INFO : umount: umount passed
Jan 13 21:38:13.697166 ignition[1205]: INFO : POST message to Packet Timeline
Jan 13 21:38:13.697166 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 21:38:13.586150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:38:13.605733 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:38:13.606134 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:38:13.623723 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 21:38:13.624138 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 21:38:13.653268 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:38:13.668128 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:38:13.668255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:38:13.684229 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:38:13.706162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:38:13.706388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:38:13.724550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:38:13.724912 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:38:13.769271 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:38:13.773844 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:38:13.774126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:38:13.832127 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:38:13.832175 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:38:14.544587 ignition[1205]: INFO : GET result: OK
Jan 13 21:38:14.926803 ignition[1205]: INFO : Ignition finished successfully
Jan 13 21:38:14.930597 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:38:14.930880 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:38:14.947807 systemd[1]: Stopped target network.target - Network.
Jan 13 21:38:14.955480 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:38:14.955630 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:38:14.970546 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:38:14.970684 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:38:14.986558 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:38:14.986694 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:38:15.014437 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:38:15.014603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:38:15.032435 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:38:15.032601 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:38:15.040997 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:38:15.050145 systemd-networkd[917]: enp1s0f1np1: DHCPv6 lease lost
Jan 13 21:38:15.056253 systemd-networkd[917]: enp1s0f0np0: DHCPv6 lease lost
Jan 13 21:38:15.057777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:38:15.087167 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:38:15.087445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:38:15.107274 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:38:15.107661 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:38:15.117761 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:38:15.117876 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:38:15.147296 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:38:15.161286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:38:15.161319 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:38:15.189378 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:38:15.189462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:38:15.208395 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:38:15.208535 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:38:15.226420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:38:15.226585 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:38:15.247655 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:38:15.269868 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:38:15.270272 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:38:15.275767 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:38:15.275911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:38:15.301438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:38:15.301549 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:38:15.322364 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:38:15.322495 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:38:15.359171 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:38:15.359443 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:38:15.397171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:38:15.397409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:38:15.656186 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:38:15.447120 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:38:15.456201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:38:15.456231 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:38:15.475235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:38:15.475270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:38:15.504828 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:38:15.504959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:38:15.527797 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:38:15.528037 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:38:15.550499 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:38:15.572564 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:38:15.596988 systemd[1]: Switching root.
Jan 13 21:38:15.772059 systemd-journald[268]: Journal stopped
Jan 13 21:38:17.390827 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:38:17.390842 kernel: SELinux: policy capability open_perms=1
Jan 13 21:38:17.390848 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:38:17.390855 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:38:17.390860 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:38:17.390865 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:38:17.390871 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:38:17.390877 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:38:17.390882 kernel: audit: type=1403 audit(1736804295.881:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:38:17.390888 systemd[1]: Successfully loaded SELinux policy in 74.126ms.
Jan 13 21:38:17.390896 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.087ms.
Jan 13 21:38:17.390903 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:38:17.390909 systemd[1]: Detected architecture x86-64.
Jan 13 21:38:17.390915 systemd[1]: Detected first boot.
Jan 13 21:38:17.390922 systemd[1]: Hostname set to .
Jan 13 21:38:17.390929 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:38:17.390936 zram_generator::config[1255]: No configuration found.
Jan 13 21:38:17.390942 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:38:17.390948 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:38:17.390954 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:38:17.390960 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:38:17.390967 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:38:17.390974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:38:17.390980 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:38:17.390987 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:38:17.390993 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:38:17.391000 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:38:17.391009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:38:17.391015 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:38:17.391023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:38:17.391030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:38:17.391036 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:38:17.391044 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:38:17.391050 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:38:17.391057 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:38:17.391063 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jan 13 21:38:17.391069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:38:17.391077 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:38:17.391083 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:38:17.391090 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:38:17.391098 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:38:17.391105 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:38:17.391111 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:38:17.391118 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:38:17.391125 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:38:17.391132 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:38:17.391139 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:38:17.391145 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:38:17.391152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:38:17.391158 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:38:17.391166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:38:17.391173 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:38:17.391180 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:38:17.391186 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:38:17.391193 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:38:17.391199 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:38:17.391206 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:38:17.391214 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:38:17.391221 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:38:17.391227 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:38:17.391234 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:38:17.391241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:38:17.391247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:38:17.391254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:38:17.391261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:38:17.391267 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:38:17.391275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:38:17.391282 kernel: ACPI: bus type drm_connector registered
Jan 13 21:38:17.391288 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:38:17.391294 kernel: fuse: init (API version 7.39)
Jan 13 21:38:17.391300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:38:17.391307 kernel: loop: module loaded
Jan 13 21:38:17.391313 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:38:17.391320 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:38:17.391329 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:38:17.391335 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:38:17.391342 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:38:17.391349 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:38:17.391363 systemd-journald[1358]: Collecting audit messages is disabled.
Jan 13 21:38:17.391378 systemd-journald[1358]: Journal started
Jan 13 21:38:17.391392 systemd-journald[1358]: Runtime Journal (/run/log/journal/48f95e2af93b42479d38e3680ae33c16) is 8.0M, max 639.9M, 631.9M free.
Jan 13 21:38:16.267481 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:38:16.283360 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 21:38:16.283589 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:38:17.404007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:38:17.425044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:38:17.448066 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:38:17.469075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:38:17.491174 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:38:17.491206 systemd[1]: Stopped verity-setup.service.
Jan 13 21:38:17.522979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:38:17.523000 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:38:17.533435 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:38:17.543144 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:38:17.554292 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:38:17.565272 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:38:17.575265 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:38:17.585371 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:38:17.595442 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:38:17.606516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:38:17.617371 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:38:17.617488 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:38:17.628427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:38:17.628568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:38:17.639517 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:38:17.639698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:38:17.649757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:38:17.650056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:38:17.662897 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:38:17.663267 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:38:17.673862 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:38:17.674276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:38:17.684858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:38:17.694977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:38:17.706867 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:38:17.718865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:38:17.738342 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:38:17.757261 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:38:17.767973 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:38:17.777183 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:38:17.777214 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:38:17.788367 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:38:17.815668 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:38:17.830170 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:38:17.841258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:38:17.842311 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:38:17.852661 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:38:17.863140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:38:17.863898 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:38:17.870928 systemd-journald[1358]: Time spent on flushing to /var/log/journal/48f95e2af93b42479d38e3680ae33c16 is 13.439ms for 1359 entries.
Jan 13 21:38:17.870928 systemd-journald[1358]: System Journal (/var/log/journal/48f95e2af93b42479d38e3680ae33c16) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:38:17.895794 systemd-journald[1358]: Received client request to flush runtime journal.
Jan 13 21:38:17.879697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:38:17.880419 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:38:17.890857 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:38:17.902865 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:38:17.916010 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 21:38:17.918783 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:38:17.932215 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:38:17.945082 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:38:17.949241 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:38:17.960343 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:38:17.971231 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:38:17.986068 kernel: loop1: detected capacity change from 0 to 211296
Jan 13 21:38:17.988335 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:38:17.999205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:38:18.009205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:38:18.022133 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:38:18.050021 kernel: loop2: detected capacity change from 0 to 8
Jan 13 21:38:18.050262 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:38:18.061764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:38:18.073654 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:38:18.074078 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:38:18.085576 udevadm[1394]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:38:18.089226 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Jan 13 21:38:18.089236 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Jan 13 21:38:18.091564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:38:18.110011 kernel: loop3: detected capacity change from 0 to 140992
Jan 13 21:38:18.161013 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 21:38:18.189054 kernel: loop5: detected capacity change from 0 to 211296
Jan 13 21:38:18.190530 ldconfig[1384]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:38:18.191793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:38:18.208069 kernel: loop6: detected capacity change from 0 to 8
Jan 13 21:38:18.215075 kernel: loop7: detected capacity change from 0 to 140992
Jan 13 21:38:18.226837 (sd-merge)[1413]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Jan 13 21:38:18.227135 (sd-merge)[1413]: Merged extensions into '/usr'.
Jan 13 21:38:18.229250 systemd[1]: Reloading requested from client PID 1389 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:38:18.229257 systemd[1]: Reloading...
Jan 13 21:38:18.249075 zram_generator::config[1438]: No configuration found.
Jan 13 21:38:18.318653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:38:18.357213 systemd[1]: Reloading finished in 127 ms.
Jan 13 21:38:18.382019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:38:18.393345 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:38:18.415288 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:38:18.422930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:38:18.435080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:38:18.447234 systemd-tmpfiles[1497]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:38:18.447609 systemd-tmpfiles[1497]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:38:18.448522 systemd-tmpfiles[1497]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:38:18.448829 systemd-tmpfiles[1497]: ACLs are not supported, ignoring.
Jan 13 21:38:18.448892 systemd-tmpfiles[1497]: ACLs are not supported, ignoring.
Jan 13 21:38:18.451733 systemd-tmpfiles[1497]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:38:18.451740 systemd-tmpfiles[1497]: Skipping /boot
Jan 13 21:38:18.458945 systemd-tmpfiles[1497]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:38:18.458953 systemd-tmpfiles[1497]: Skipping /boot
Jan 13 21:38:18.460118 systemd[1]: Reloading requested from client PID 1495 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:38:18.460131 systemd[1]: Reloading...
Jan 13 21:38:18.481237 systemd-udevd[1498]: Using default interface naming scheme 'v255'.
Jan 13 21:38:18.488139 zram_generator::config[1524]: No configuration found.
Jan 13 21:38:18.534466 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Jan 13 21:38:18.534548 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1558)
Jan 13 21:38:18.534576 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 21:38:18.548014 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:38:18.548075 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:38:18.554077 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:38:18.565013 kernel: IPMI message handler: version 39.2
Jan 13 21:38:18.576014 kernel: ipmi device interface
Jan 13 21:38:18.576079 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jan 13 21:38:18.606494 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jan 13 21:38:18.606609 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Jan 13 21:38:18.606700 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Jan 13 21:38:18.606794 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Jan 13 21:38:18.612790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:38:18.659012 kernel: iTCO_vendor_support: vendor-support=0
Jan 13 21:38:18.659042 kernel: ipmi_si: IPMI System Interface driver
Jan 13 21:38:18.669999 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Jan 13 21:38:18.670144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 13 21:38:18.670535 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Jan 13 21:38:18.670858 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Jan 13 21:38:18.670874 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Jan 13 21:38:18.670885 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Jan 13 21:38:18.718791 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jan 13 21:38:18.718879 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Jan 13 21:38:18.718979 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Jan 13 21:38:18.718996 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Jan 13 21:38:18.734456 systemd[1]: Reloading finished in 273 ms.
Jan 13 21:38:18.750143 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Jan 13 21:38:18.750437 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jan 13 21:38:18.769035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:38:18.769332 kernel: intel_rapl_common: Found RAPL domain package
Jan 13 21:38:18.769350 kernel: intel_rapl_common: Found RAPL domain core
Jan 13 21:38:18.770007 kernel: intel_rapl_common: Found RAPL domain dram
Jan 13 21:38:18.770026 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Jan 13 21:38:18.808228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:38:18.827006 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Jan 13 21:38:18.837736 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:38:18.857201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:38:18.864008 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Jan 13 21:38:18.871007 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jan 13 21:38:18.872183 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:38:18.881027 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:38:18.890785 augenrules[1698]: No rules
Jan 13 21:38:18.892201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:38:18.902554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:38:18.912584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:38:18.922511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:38:18.944127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:38:18.954158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:38:18.954691 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:38:18.965662 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:38:18.976924 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:38:18.977898 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:38:18.978785 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:38:19.009122 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:38:19.020670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:38:19.030093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:38:19.030579 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:38:19.041261 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:38:19.041349 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:38:19.041612 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:38:19.041747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:38:19.041814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:38:19.041949 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:38:19.042016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:38:19.042153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:38:19.042216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:38:19.042349 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:38:19.042411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:38:19.042544 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:38:19.042683 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:38:19.059205 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:38:19.059240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:38:19.059272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:38:19.059886 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:38:19.060773 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:38:19.060800 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:38:19.061014 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:38:19.066486 lvm[1726]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:38:19.071518 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:38:19.081850 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:38:19.112355 systemd-resolved[1711]: Positive Trust Anchors:
Jan 13 21:38:19.112363 systemd-resolved[1711]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:38:19.112386 systemd-resolved[1711]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:38:19.115391 systemd-resolved[1711]: Using system hostname 'ci-4152.2.0-a-ed112912ac'.
Jan 13 21:38:19.122120 systemd-networkd[1710]: lo: Link UP
Jan 13 21:38:19.122124 systemd-networkd[1710]: lo: Gained carrier
Jan 13 21:38:19.124649 systemd-networkd[1710]: bond0: netdev ready
Jan 13 21:38:19.125632 systemd-networkd[1710]: Enumeration completed
Jan 13 21:38:19.130097 systemd-networkd[1710]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:15:b7:74.network.
Jan 13 21:38:19.163270 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:38:19.174286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:38:19.184083 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:38:19.193194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:38:19.204190 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:38:19.216337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:38:19.226076 systemd[1]: Reached target network.target - Network.
Jan 13 21:38:19.234078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:38:19.245076 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:38:19.254122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:38:19.265092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:38:19.276082 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:38:19.287189 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:38:19.287205 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:38:19.296207 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:38:19.306274 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:38:19.316342 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:38:19.328215 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:38:19.336875 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:38:19.348813 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:38:19.364106 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 13 21:38:19.365097 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:38:19.378077 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Jan 13 21:38:19.380026 systemd-networkd[1710]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:15:b7:75.network.
Jan 13 21:38:19.398441 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:38:19.409263 lvm[1752]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:38:19.412089 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:38:19.423828 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:38:19.433258 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:38:19.443102 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:38:19.451094 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:38:19.451111 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:38:19.459199 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:38:19.469972 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:38:19.479770 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:38:19.488868 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:38:19.496968 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:38:19.497241 coreos-metadata[1755]: Jan 13 21:38:19.497 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 21:38:19.498174 coreos-metadata[1755]: Jan 13 21:38:19.498 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Jan 13 21:38:19.498911 jq[1759]: false
Jan 13 21:38:19.501979 dbus-daemon[1756]: [system] SELinux support is enabled
Jan 13 21:38:19.507262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:38:19.519298 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:38:19.528222 extend-filesystems[1760]: Found loop4
Jan 13 21:38:19.528222 extend-filesystems[1760]: Found loop5
Jan 13 21:38:19.589117 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 13 21:38:19.589234 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Jan 13 21:38:19.589246 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Jan 13 21:38:19.589254 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1577)
Jan 13 21:38:19.589265 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Jan 13 21:38:19.534337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found loop6
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found loop7
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda1
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda2
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda3
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found usr
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda4
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda6
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda7
Jan 13 21:38:19.589324 extend-filesystems[1760]: Found sda9
Jan 13 21:38:19.589324 extend-filesystems[1760]: Checking size of /dev/sda9
Jan 13 21:38:19.589324 extend-filesystems[1760]: Resized partition /dev/sda9
Jan 13 21:38:19.753142 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Jan 13 21:38:19.753160 kernel: bond0: active interface up!
Jan 13 21:38:19.563929 systemd-networkd[1710]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Jan 13 21:38:19.753265 extend-filesystems[1775]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:38:19.565629 systemd-networkd[1710]: enp1s0f0np0: Link UP
Jan 13 21:38:19.565814 systemd-networkd[1710]: enp1s0f0np0: Gained carrier
Jan 13 21:38:19.575724 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:38:19.582298 systemd-networkd[1710]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:15:b7:74.network.
Jan 13 21:38:19.768433 update_engine[1786]: I20250113 21:38:19.662564 1786 main.cc:92] Flatcar Update Engine starting
Jan 13 21:38:19.768433 update_engine[1786]: I20250113 21:38:19.663306 1786 update_check_scheduler.cc:74] Next update check in 10m53s
Jan 13 21:38:19.582469 systemd-networkd[1710]: enp1s0f1np1: Link UP
Jan 13 21:38:19.775247 jq[1787]: true
Jan 13 21:38:19.582634 systemd-networkd[1710]: enp1s0f1np1: Gained carrier
Jan 13 21:38:19.592179 systemd-networkd[1710]: bond0: Link UP
Jan 13 21:38:19.592366 systemd-networkd[1710]: bond0: Gained carrier
Jan 13 21:38:19.592489 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection.
Jan 13 21:38:19.592793 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection.
Jan 13 21:38:19.592982 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection.
Jan 13 21:38:19.593080 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection.
Jan 13 21:38:19.596140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:38:19.602467 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:38:19.629937 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Jan 13 21:38:19.636469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:38:19.636797 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:38:19.643501 systemd-logind[1781]: Watching system buttons on /dev/input/event3 (Power Button)
Jan 13 21:38:19.643519 systemd-logind[1781]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 13 21:38:19.643530 systemd-logind[1781]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Jan 13 21:38:19.643747 systemd-logind[1781]: New seat seat0.
Jan 13 21:38:19.667744 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:38:19.674442 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:38:19.717476 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:38:19.723311 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:38:19.748236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:38:19.748327 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:38:19.748473 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:38:19.748559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:38:19.753596 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:38:19.753683 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:38:19.780572 dbus-daemon[1756]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 21:38:19.785905 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Jan 13 21:38:19.786016 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Jan 13 21:38:19.786535 (ntainerd)[1791]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:38:19.788268 jq[1790]: true
Jan 13 21:38:19.790269 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:38:19.803009 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 13 21:38:19.808382 tar[1789]: linux-amd64/helm Jan 13 21:38:19.809578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:38:19.809737 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:38:19.820149 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:38:19.820280 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:38:19.831142 sshd_keygen[1784]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:38:19.832010 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:38:19.844449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:38:19.855315 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:38:19.867997 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:38:19.868106 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:38:19.868449 locksmithd[1822]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:38:19.878435 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:38:19.891653 bash[1820]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:38:19.892536 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:38:19.903410 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:38:19.927221 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 13 21:38:19.935857 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 13 21:38:19.945208 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:38:19.961360 containerd[1791]: time="2025-01-13T21:38:19.961312415Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 21:38:19.967198 systemd[1]: Starting sshkeys.service... Jan 13 21:38:19.973987 containerd[1791]: time="2025-01-13T21:38:19.973968124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974708 containerd[1791]: time="2025-01-13T21:38:19.974693266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974708 containerd[1791]: time="2025-01-13T21:38:19.974707539Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:38:19.974748 containerd[1791]: time="2025-01-13T21:38:19.974717122Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:38:19.974816 containerd[1791]: time="2025-01-13T21:38:19.974807986Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:38:19.974836 containerd[1791]: time="2025-01-13T21:38:19.974819675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974861 containerd[1791]: time="2025-01-13T21:38:19.974852869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974877 containerd[1791]: time="2025-01-13T21:38:19.974861436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974962 containerd[1791]: time="2025-01-13T21:38:19.974953249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974978 containerd[1791]: time="2025-01-13T21:38:19.974962641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974978 containerd[1791]: time="2025-01-13T21:38:19.974970058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:38:19.974978 containerd[1791]: time="2025-01-13T21:38:19.974975294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.975035 containerd[1791]: time="2025-01-13T21:38:19.975023962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.975288 containerd[1791]: time="2025-01-13T21:38:19.975251843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:38:19.975349 containerd[1791]: time="2025-01-13T21:38:19.975311665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:38:19.975349 containerd[1791]: time="2025-01-13T21:38:19.975326236Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:38:19.975384 containerd[1791]: time="2025-01-13T21:38:19.975375043Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:38:19.975412 containerd[1791]: time="2025-01-13T21:38:19.975405912Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:38:19.982771 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:38:19.987718 containerd[1791]: time="2025-01-13T21:38:19.987677422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:38:19.987718 containerd[1791]: time="2025-01-13T21:38:19.987700778Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:38:19.987718 containerd[1791]: time="2025-01-13T21:38:19.987712695Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:38:19.987787 containerd[1791]: time="2025-01-13T21:38:19.987722332Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:38:19.987787 containerd[1791]: time="2025-01-13T21:38:19.987730248Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:38:19.987816 containerd[1791]: time="2025-01-13T21:38:19.987803050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 21:38:19.987942 containerd[1791]: time="2025-01-13T21:38:19.987933881Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:38:19.987997 containerd[1791]: time="2025-01-13T21:38:19.987990381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:38:19.988022 containerd[1791]: time="2025-01-13T21:38:19.988000760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:38:19.988022 containerd[1791]: time="2025-01-13T21:38:19.988017194Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:38:19.988051 containerd[1791]: time="2025-01-13T21:38:19.988025571Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988051 containerd[1791]: time="2025-01-13T21:38:19.988033100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988051 containerd[1791]: time="2025-01-13T21:38:19.988041057Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988051 containerd[1791]: time="2025-01-13T21:38:19.988048999Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988101 containerd[1791]: time="2025-01-13T21:38:19.988057098Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988101 containerd[1791]: time="2025-01-13T21:38:19.988064530Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 13 21:38:19.988101 containerd[1791]: time="2025-01-13T21:38:19.988071496Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988101 containerd[1791]: time="2025-01-13T21:38:19.988078008Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:38:19.988101 containerd[1791]: time="2025-01-13T21:38:19.988088436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988101 containerd[1791]: time="2025-01-13T21:38:19.988096262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988105131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988112471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988119191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988126299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988132628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988139295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988146364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988154747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988162430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988169567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988180 containerd[1791]: time="2025-01-13T21:38:19.988176100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988323 containerd[1791]: time="2025-01-13T21:38:19.988183624Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:38:19.988323 containerd[1791]: time="2025-01-13T21:38:19.988196483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988323 containerd[1791]: time="2025-01-13T21:38:19.988204260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988323 containerd[1791]: time="2025-01-13T21:38:19.988210279Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:38:19.988567 containerd[1791]: time="2025-01-13T21:38:19.988555468Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:38:19.988583 containerd[1791]: time="2025-01-13T21:38:19.988572679Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:38:19.988583 containerd[1791]: time="2025-01-13T21:38:19.988579585Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:38:19.988616 containerd[1791]: time="2025-01-13T21:38:19.988586633Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:38:19.988616 containerd[1791]: time="2025-01-13T21:38:19.988592045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:38:19.988616 containerd[1791]: time="2025-01-13T21:38:19.988599622Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:38:19.988616 containerd[1791]: time="2025-01-13T21:38:19.988606422Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:38:19.988669 containerd[1791]: time="2025-01-13T21:38:19.988617915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:38:19.988805 containerd[1791]: time="2025-01-13T21:38:19.988782486Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:38:19.988885 containerd[1791]: time="2025-01-13T21:38:19.988811902Z" level=info msg="Connect containerd service" Jan 13 21:38:19.988885 containerd[1791]: time="2025-01-13T21:38:19.988829714Z" level=info msg="using legacy CRI server" Jan 13 21:38:19.988885 containerd[1791]: time="2025-01-13T21:38:19.988834428Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:38:19.989197 containerd[1791]: time="2025-01-13T21:38:19.989174528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:38:19.989566 containerd[1791]: time="2025-01-13T21:38:19.989554963Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:38:19.989665 containerd[1791]: time="2025-01-13T21:38:19.989644424Z" level=info msg="Start subscribing containerd event" Jan 13 21:38:19.989695 containerd[1791]: time="2025-01-13T21:38:19.989678185Z" level=info msg="Start recovering state" Jan 13 21:38:19.989735 containerd[1791]: time="2025-01-13T21:38:19.989726809Z" level=info msg="Start event monitor" Jan 13 21:38:19.989765 containerd[1791]: time="2025-01-13T21:38:19.989736210Z" level=info msg="Start 
snapshots syncer" Jan 13 21:38:19.989765 containerd[1791]: time="2025-01-13T21:38:19.989745595Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:38:19.989765 containerd[1791]: time="2025-01-13T21:38:19.989752759Z" level=info msg="Start streaming server" Jan 13 21:38:19.989836 containerd[1791]: time="2025-01-13T21:38:19.989800774Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:38:19.989836 containerd[1791]: time="2025-01-13T21:38:19.989826539Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:38:19.989893 containerd[1791]: time="2025-01-13T21:38:19.989852715Z" level=info msg="containerd successfully booted in 0.029324s" Jan 13 21:38:20.013240 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:38:20.024101 coreos-metadata[1867]: Jan 13 21:38:20.024 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 21:38:20.024288 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:38:20.057037 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jan 13 21:38:20.080973 extend-filesystems[1775]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 21:38:20.080973 extend-filesystems[1775]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 13 21:38:20.080973 extend-filesystems[1775]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jan 13 21:38:20.122117 extend-filesystems[1760]: Resized filesystem in /dev/sda9 Jan 13 21:38:20.122117 extend-filesystems[1760]: Found sdb Jan 13 21:38:20.138046 tar[1789]: linux-amd64/LICENSE Jan 13 21:38:20.138046 tar[1789]: linux-amd64/README.md Jan 13 21:38:20.081433 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:38:20.081530 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:38:20.137948 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 13 21:38:20.498314 coreos-metadata[1755]: Jan 13 21:38:20.498 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jan 13 21:38:20.961163 systemd-networkd[1710]: bond0: Gained IPv6LL Jan 13 21:38:20.961402 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection. Jan 13 21:38:21.473304 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection. Jan 13 21:38:21.473493 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection. Jan 13 21:38:21.475116 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:38:21.486436 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:38:21.507234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:38:21.517706 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:38:21.535636 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:38:22.153863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:38:22.171109 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 13 21:38:22.171232 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 13 21:38:22.191517 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:38:22.252037 kernel: mlx5_core 0000:01:00.0: lag map: port 1:2 port 2:2 Jan 13 21:38:22.296034 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 13 21:38:22.736265 kubelet[1892]: E0113 21:38:22.736195 1892 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:38:22.737508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:38:22.737583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:38:23.185867 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:38:23.203360 systemd[1]: Started sshd@0-86.109.11.45:22-147.75.109.163:41872.service - OpenSSH per-connection server daemon (147.75.109.163:41872). Jan 13 21:38:23.250656 sshd[1915]: Accepted publickey for core from 147.75.109.163 port 41872 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:23.252155 sshd-session[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:23.257856 systemd-logind[1781]: New session 1 of user core. Jan 13 21:38:23.258662 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:38:23.286338 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:38:23.298939 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 13 21:38:23.326681 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:38:23.346282 (systemd)[1919]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:38:23.432612 systemd[1919]: Queued start job for default target default.target. Jan 13 21:38:23.444661 systemd[1919]: Created slice app.slice - User Application Slice. Jan 13 21:38:23.444675 systemd[1919]: Reached target paths.target - Paths. Jan 13 21:38:23.444684 systemd[1919]: Reached target timers.target - Timers. Jan 13 21:38:23.445313 systemd[1919]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:38:23.450776 systemd[1919]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:38:23.450805 systemd[1919]: Reached target sockets.target - Sockets. Jan 13 21:38:23.450814 systemd[1919]: Reached target basic.target - Basic System. Jan 13 21:38:23.450835 systemd[1919]: Reached target default.target - Main User Target. Jan 13 21:38:23.450851 systemd[1919]: Startup finished in 95ms. Jan 13 21:38:23.450949 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:38:23.462067 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:38:23.536298 systemd[1]: Started sshd@1-86.109.11.45:22-147.75.109.163:41888.service - OpenSSH per-connection server daemon (147.75.109.163:41888). Jan 13 21:38:23.573080 sshd[1930]: Accepted publickey for core from 147.75.109.163 port 41888 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:23.573698 sshd-session[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:23.576298 systemd-logind[1781]: New session 2 of user core. Jan 13 21:38:23.577062 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 21:38:23.637305 sshd[1932]: Connection closed by 147.75.109.163 port 41888 Jan 13 21:38:23.637444 sshd-session[1930]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:23.650822 systemd[1]: sshd@1-86.109.11.45:22-147.75.109.163:41888.service: Deactivated successfully. Jan 13 21:38:23.651659 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:38:23.652453 systemd-logind[1781]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:38:23.653124 systemd[1]: Started sshd@2-86.109.11.45:22-147.75.109.163:41900.service - OpenSSH per-connection server daemon (147.75.109.163:41900). Jan 13 21:38:23.666092 systemd-logind[1781]: Removed session 2. Jan 13 21:38:23.702205 sshd[1937]: Accepted publickey for core from 147.75.109.163 port 41900 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:23.703525 sshd-session[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:23.708955 systemd-logind[1781]: New session 3 of user core. Jan 13 21:38:23.718491 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:38:23.790259 coreos-metadata[1867]: Jan 13 21:38:23.790 INFO Fetch successful Jan 13 21:38:23.797901 sshd[1939]: Connection closed by 147.75.109.163 port 41900 Jan 13 21:38:23.798658 sshd-session[1937]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:23.804891 systemd[1]: sshd@2-86.109.11.45:22-147.75.109.163:41900.service: Deactivated successfully. Jan 13 21:38:23.809169 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:38:23.812503 systemd-logind[1781]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:38:23.815338 systemd-logind[1781]: Removed session 3. 
Jan 13 21:38:23.871178 unknown[1867]: wrote ssh authorized keys file for user: core Jan 13 21:38:23.896750 update-ssh-keys[1943]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:38:23.897011 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:38:23.908872 systemd[1]: Finished sshkeys.service. Jan 13 21:38:23.972524 coreos-metadata[1755]: Jan 13 21:38:23.972 INFO Fetch successful Jan 13 21:38:24.060897 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:38:24.072365 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 13 21:38:24.402864 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 13 21:38:24.416683 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:38:24.426779 systemd[1]: Startup finished in 2.676s (kernel) + 21.046s (initrd) + 8.618s (userspace) = 32.341s. Jan 13 21:38:24.445523 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:38:24.448492 systemd-logind[1781]: New session 4 of user core. Jan 13 21:38:24.449341 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:38:24.452166 login[1849]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:38:24.454804 systemd-logind[1781]: New session 5 of user core. Jan 13 21:38:24.455331 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:38:32.947641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:38:32.960263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:38:33.170602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:38:33.172768 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:38:33.199628 kubelet[1986]: E0113 21:38:33.199518 1986 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:38:33.201931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:38:33.202019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:38:33.814470 systemd[1]: Started sshd@3-86.109.11.45:22-147.75.109.163:37440.service - OpenSSH per-connection server daemon (147.75.109.163:37440). Jan 13 21:38:33.843027 sshd[2002]: Accepted publickey for core from 147.75.109.163 port 37440 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:33.843810 sshd-session[2002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:33.846917 systemd-logind[1781]: New session 6 of user core. Jan 13 21:38:33.863430 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:38:33.927740 sshd[2004]: Connection closed by 147.75.109.163 port 37440 Jan 13 21:38:33.928497 sshd-session[2002]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:33.957193 systemd[1]: sshd@3-86.109.11.45:22-147.75.109.163:37440.service: Deactivated successfully. Jan 13 21:38:33.958077 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:38:33.958734 systemd-logind[1781]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:38:33.959360 systemd[1]: Started sshd@4-86.109.11.45:22-147.75.109.163:37442.service - OpenSSH per-connection server daemon (147.75.109.163:37442). 
Jan 13 21:38:33.959956 systemd-logind[1781]: Removed session 6. Jan 13 21:38:33.988437 sshd[2009]: Accepted publickey for core from 147.75.109.163 port 37442 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:33.989147 sshd-session[2009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:33.991883 systemd-logind[1781]: New session 7 of user core. Jan 13 21:38:34.004256 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:38:34.057332 sshd[2011]: Connection closed by 147.75.109.163 port 37442 Jan 13 21:38:34.058146 sshd-session[2009]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:34.074822 systemd[1]: sshd@4-86.109.11.45:22-147.75.109.163:37442.service: Deactivated successfully. Jan 13 21:38:34.078651 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:38:34.082253 systemd-logind[1781]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:38:34.099774 systemd[1]: Started sshd@5-86.109.11.45:22-147.75.109.163:37448.service - OpenSSH per-connection server daemon (147.75.109.163:37448). Jan 13 21:38:34.102742 systemd-logind[1781]: Removed session 7. Jan 13 21:38:34.131372 sshd[2016]: Accepted publickey for core from 147.75.109.163 port 37448 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:34.131970 sshd-session[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:34.134551 systemd-logind[1781]: New session 8 of user core. Jan 13 21:38:34.135070 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:38:34.188939 sshd[2019]: Connection closed by 147.75.109.163 port 37448 Jan 13 21:38:34.189487 sshd-session[2016]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:34.204792 systemd[1]: sshd@5-86.109.11.45:22-147.75.109.163:37448.service: Deactivated successfully. Jan 13 21:38:34.208709 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 13 21:38:34.212122 systemd-logind[1781]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:38:34.235537 systemd[1]: Started sshd@6-86.109.11.45:22-147.75.109.163:37454.service - OpenSSH per-connection server daemon (147.75.109.163:37454). Jan 13 21:38:34.236755 systemd-logind[1781]: Removed session 8. Jan 13 21:38:34.276166 sshd[2024]: Accepted publickey for core from 147.75.109.163 port 37454 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:34.277493 sshd-session[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:34.282105 systemd-logind[1781]: New session 9 of user core. Jan 13 21:38:34.305336 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:38:34.364952 sudo[2027]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:38:34.365106 sudo[2027]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:38:34.375669 sudo[2027]: pam_unix(sudo:session): session closed for user root Jan 13 21:38:34.376430 sshd[2026]: Connection closed by 147.75.109.163 port 37454 Jan 13 21:38:34.376603 sshd-session[2024]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:34.396153 systemd[1]: sshd@6-86.109.11.45:22-147.75.109.163:37454.service: Deactivated successfully. Jan 13 21:38:34.397198 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:38:34.398181 systemd-logind[1781]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:38:34.399141 systemd[1]: Started sshd@7-86.109.11.45:22-147.75.109.163:37458.service - OpenSSH per-connection server daemon (147.75.109.163:37458). Jan 13 21:38:34.400013 systemd-logind[1781]: Removed session 9. 
Jan 13 21:38:34.441511 sshd[2032]: Accepted publickey for core from 147.75.109.163 port 37458 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:34.442698 sshd-session[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:34.446894 systemd-logind[1781]: New session 10 of user core. Jan 13 21:38:34.466563 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:38:34.527029 sudo[2036]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:38:34.527180 sudo[2036]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:38:34.529183 sudo[2036]: pam_unix(sudo:session): session closed for user root Jan 13 21:38:34.531823 sudo[2035]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 21:38:34.531972 sudo[2035]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:38:34.553366 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 21:38:34.578807 augenrules[2058]: No rules Jan 13 21:38:34.579462 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:38:34.579564 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 21:38:34.580172 sudo[2035]: pam_unix(sudo:session): session closed for user root Jan 13 21:38:34.581102 sshd[2034]: Connection closed by 147.75.109.163 port 37458 Jan 13 21:38:34.581242 sshd-session[2032]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:34.591697 systemd[1]: sshd@7-86.109.11.45:22-147.75.109.163:37458.service: Deactivated successfully. Jan 13 21:38:34.592482 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:38:34.593312 systemd-logind[1781]: Session 10 logged out. Waiting for processes to exit. 
Jan 13 21:38:34.593994 systemd[1]: Started sshd@8-86.109.11.45:22-147.75.109.163:37462.service - OpenSSH per-connection server daemon (147.75.109.163:37462). Jan 13 21:38:34.594631 systemd-logind[1781]: Removed session 10. Jan 13 21:38:34.624964 sshd[2066]: Accepted publickey for core from 147.75.109.163 port 37462 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU Jan 13 21:38:34.625695 sshd-session[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:38:34.628660 systemd-logind[1781]: New session 11 of user core. Jan 13 21:38:34.638246 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:38:34.701150 sudo[2069]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:38:34.701978 sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:38:35.084690 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:38:35.084873 (dockerd)[2094]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:38:35.369140 dockerd[2094]: time="2025-01-13T21:38:35.368986347Z" level=info msg="Starting up" Jan 13 21:38:35.434928 dockerd[2094]: time="2025-01-13T21:38:35.434880322Z" level=info msg="Loading containers: start." Jan 13 21:38:35.570081 kernel: Initializing XFRM netlink socket Jan 13 21:38:35.585155 systemd-timesyncd[1712]: Network configuration changed, trying to establish connection. Jan 13 21:38:35.630632 systemd-networkd[1710]: docker0: Link UP Jan 13 21:38:35.663111 dockerd[2094]: time="2025-01-13T21:38:35.663061342Z" level=info msg="Loading containers: done." 
Jan 13 21:38:35.673293 dockerd[2094]: time="2025-01-13T21:38:35.673247301Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:38:35.673363 dockerd[2094]: time="2025-01-13T21:38:35.673294115Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 21:38:35.673363 dockerd[2094]: time="2025-01-13T21:38:35.673346194Z" level=info msg="Daemon has completed initialization" Jan 13 21:38:35.673560 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3358653121-merged.mount: Deactivated successfully. Jan 13 21:38:35.687386 dockerd[2094]: time="2025-01-13T21:38:35.687344136Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:38:35.687463 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:38:35.989188 systemd-timesyncd[1712]: Contacted time server [2604:2dc0:101:200::b9d]:123 (2.flatcar.pool.ntp.org). Jan 13 21:38:35.989258 systemd-timesyncd[1712]: Initial clock synchronization to Mon 2025-01-13 21:38:35.996249 UTC. Jan 13 21:38:36.507746 containerd[1791]: time="2025-01-13T21:38:36.507722294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:38:37.054523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111747300.mount: Deactivated successfully. 
Jan 13 21:38:37.968182 containerd[1791]: time="2025-01-13T21:38:37.968127687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:37.968379 containerd[1791]: time="2025-01-13T21:38:37.968247634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 21:38:37.968766 containerd[1791]: time="2025-01-13T21:38:37.968723407Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:37.970410 containerd[1791]: time="2025-01-13T21:38:37.970369434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:37.971488 containerd[1791]: time="2025-01-13T21:38:37.971446193Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.463701301s" Jan 13 21:38:37.971488 containerd[1791]: time="2025-01-13T21:38:37.971465116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:38:37.982899 containerd[1791]: time="2025-01-13T21:38:37.982881576Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:38:38.960442 systemd[1]: Started sshd@9-86.109.11.45:22-218.92.0.140:29806.service - OpenSSH per-connection server daemon (218.92.0.140:29806). 
Jan 13 21:38:39.137541 containerd[1791]: time="2025-01-13T21:38:39.137483751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:39.137746 containerd[1791]: time="2025-01-13T21:38:39.137710529Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 21:38:39.138133 containerd[1791]: time="2025-01-13T21:38:39.138091482Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:39.139688 containerd[1791]: time="2025-01-13T21:38:39.139645222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:39.140759 containerd[1791]: time="2025-01-13T21:38:39.140716779Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.157815871s" Jan 13 21:38:39.140759 containerd[1791]: time="2025-01-13T21:38:39.140734798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 21:38:39.151997 containerd[1791]: time="2025-01-13T21:38:39.151977119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:38:39.997657 containerd[1791]: time="2025-01-13T21:38:39.997608308Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:39.997839 containerd[1791]: time="2025-01-13T21:38:39.997796411Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 21:38:39.998242 containerd[1791]: time="2025-01-13T21:38:39.998208139Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:40.000062 containerd[1791]: time="2025-01-13T21:38:40.000017793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:40.000505 containerd[1791]: time="2025-01-13T21:38:40.000480593Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 848.483327ms" Jan 13 21:38:40.000505 containerd[1791]: time="2025-01-13T21:38:40.000493990Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:38:40.010919 containerd[1791]: time="2025-01-13T21:38:40.010900788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:38:40.061948 sshd-session[2405]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root Jan 13 21:38:40.920675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571934667.mount: Deactivated successfully. 
Jan 13 21:38:41.081980 containerd[1791]: time="2025-01-13T21:38:41.081927479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:41.082197 containerd[1791]: time="2025-01-13T21:38:41.082156537Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 21:38:41.082522 containerd[1791]: time="2025-01-13T21:38:41.082482816Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:41.083397 containerd[1791]: time="2025-01-13T21:38:41.083357247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:41.084107 containerd[1791]: time="2025-01-13T21:38:41.084064220Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.073144187s" Jan 13 21:38:41.084107 containerd[1791]: time="2025-01-13T21:38:41.084078626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:38:41.095287 containerd[1791]: time="2025-01-13T21:38:41.095219271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:38:41.538296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830127854.mount: Deactivated successfully. 
Jan 13 21:38:42.043100 sshd[2388]: PAM: Permission denied for root from 218.92.0.140 Jan 13 21:38:42.078715 containerd[1791]: time="2025-01-13T21:38:42.078691353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:42.078897 containerd[1791]: time="2025-01-13T21:38:42.078874855Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:38:42.079362 containerd[1791]: time="2025-01-13T21:38:42.079339965Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:42.080889 containerd[1791]: time="2025-01-13T21:38:42.080874365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:42.081578 containerd[1791]: time="2025-01-13T21:38:42.081564860Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 986.325239ms" Jan 13 21:38:42.081614 containerd[1791]: time="2025-01-13T21:38:42.081579886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:38:42.092655 containerd[1791]: time="2025-01-13T21:38:42.092636617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:38:42.115332 systemd[1]: Started sshd@10-86.109.11.45:22-193.32.162.132:34274.service - OpenSSH 
per-connection server daemon (193.32.162.132:34274). Jan 13 21:38:42.331854 sshd-session[2497]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root Jan 13 21:38:42.542275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687892972.mount: Deactivated successfully. Jan 13 21:38:42.543494 containerd[1791]: time="2025-01-13T21:38:42.543448384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:42.543687 containerd[1791]: time="2025-01-13T21:38:42.543649230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:38:42.544105 containerd[1791]: time="2025-01-13T21:38:42.544042553Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:42.545106 containerd[1791]: time="2025-01-13T21:38:42.545063467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:42.545556 containerd[1791]: time="2025-01-13T21:38:42.545511365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 452.843104ms" Jan 13 21:38:42.545556 containerd[1791]: time="2025-01-13T21:38:42.545525737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:38:42.556406 containerd[1791]: time="2025-01-13T21:38:42.556348318Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:38:42.808096 sshd[2495]: Invalid user validator from 193.32.162.132 port 34274 Jan 13 21:38:42.976859 sshd[2495]: Connection closed by invalid user validator 193.32.162.132 port 34274 [preauth] Jan 13 21:38:42.977639 systemd[1]: sshd@10-86.109.11.45:22-193.32.162.132:34274.service: Deactivated successfully. Jan 13 21:38:43.099743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862258038.mount: Deactivated successfully. Jan 13 21:38:43.446181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:38:43.459153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:38:43.708576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:38:43.710775 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:38:43.771424 kubelet[2562]: E0113 21:38:43.771395 2562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:38:43.773339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:38:43.773466 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:38:44.057182 sshd[2388]: PAM: Permission denied for root from 218.92.0.140 Jan 13 21:38:44.205212 containerd[1791]: time="2025-01-13T21:38:44.205156026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:44.205404 containerd[1791]: time="2025-01-13T21:38:44.205361397Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 21:38:44.205796 containerd[1791]: time="2025-01-13T21:38:44.205757163Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:44.207773 containerd[1791]: time="2025-01-13T21:38:44.207732146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:38:44.208336 containerd[1791]: time="2025-01-13T21:38:44.208294195Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.651926483s" Jan 13 21:38:44.208336 containerd[1791]: time="2025-01-13T21:38:44.208310886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:38:44.345235 sshd-session[2589]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root Jan 13 21:38:46.029445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:38:46.053477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:38:46.063670 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-11.scope)... Jan 13 21:38:46.063678 systemd[1]: Reloading... Jan 13 21:38:46.099073 zram_generator::config[2777]: No configuration found. Jan 13 21:38:46.165987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:38:46.225582 systemd[1]: Reloading finished in 161 ms. Jan 13 21:38:46.264527 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:38:46.264571 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:38:46.264682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:38:46.277366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:38:46.479152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:38:46.481506 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:38:46.506478 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:38:46.506478 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:38:46.506478 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:38:46.506689 kubelet[2840]: I0113 21:38:46.506503 2840 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:38:46.675813 sshd[2388]: PAM: Permission denied for root from 218.92.0.140 Jan 13 21:38:46.743063 kubelet[2840]: I0113 21:38:46.742957 2840 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:38:46.743063 kubelet[2840]: I0113 21:38:46.742971 2840 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:38:46.743221 kubelet[2840]: I0113 21:38:46.743168 2840 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:38:46.755877 kubelet[2840]: I0113 21:38:46.755859 2840 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:38:46.756821 kubelet[2840]: E0113 21:38:46.756784 2840 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://86.109.11.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.768368 kubelet[2840]: I0113 21:38:46.768334 2840 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:38:46.769381 kubelet[2840]: I0113 21:38:46.769374 2840 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:38:46.769499 kubelet[2840]: I0113 21:38:46.769463 2840 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:38:46.769845 kubelet[2840]: I0113 21:38:46.769810 2840 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:38:46.769845 kubelet[2840]: I0113 21:38:46.769818 2840 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:38:46.769894 kubelet[2840]: I0113 
21:38:46.769863 2840 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:38:46.769911 kubelet[2840]: I0113 21:38:46.769905 2840 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:38:46.769926 kubelet[2840]: I0113 21:38:46.769913 2840 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:38:46.769943 kubelet[2840]: I0113 21:38:46.769926 2840 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:38:46.769943 kubelet[2840]: I0113 21:38:46.769934 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:38:46.770439 kubelet[2840]: W0113 21:38:46.770368 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://86.109.11.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ed112912ac&limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.770439 kubelet[2840]: E0113 21:38:46.770413 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://86.109.11.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ed112912ac&limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.770962 kubelet[2840]: I0113 21:38:46.770930 2840 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:38:46.772640 kubelet[2840]: W0113 21:38:46.772588 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://86.109.11.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.772640 kubelet[2840]: E0113 21:38:46.772629 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://86.109.11.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.773531 kubelet[2840]: I0113 21:38:46.773494 2840 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:38:46.774413 kubelet[2840]: W0113 21:38:46.774373 2840 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:38:46.774965 kubelet[2840]: I0113 21:38:46.774915 2840 server.go:1256] "Started kubelet" Jan 13 21:38:46.775019 kubelet[2840]: I0113 21:38:46.775008 2840 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:38:46.775019 kubelet[2840]: I0113 21:38:46.775015 2840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:38:46.775233 kubelet[2840]: I0113 21:38:46.775165 2840 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:38:46.775843 kubelet[2840]: I0113 21:38:46.775798 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:38:46.775915 kubelet[2840]: I0113 21:38:46.775884 2840 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:38:46.775946 kubelet[2840]: I0113 21:38:46.775915 2840 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:38:46.775946 kubelet[2840]: I0113 21:38:46.775919 2840 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:38:46.776000 kubelet[2840]: I0113 21:38:46.775975 2840 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:38:46.776112 kubelet[2840]: W0113 21:38:46.776086 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://86.109.11.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection 
refused Jan 13 21:38:46.776138 kubelet[2840]: E0113 21:38:46.776122 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://86.109.11.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.776518 kubelet[2840]: I0113 21:38:46.776510 2840 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:38:46.776560 kubelet[2840]: I0113 21:38:46.776552 2840 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:38:46.776931 kubelet[2840]: I0113 21:38:46.776924 2840 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:38:46.777545 kubelet[2840]: E0113 21:38:46.777535 2840 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:38:46.777763 kubelet[2840]: E0113 21:38:46.777753 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ed112912ac?timeout=10s\": dial tcp 86.109.11.45:6443: connect: connection refused" interval="200ms" Jan 13 21:38:46.780452 kubelet[2840]: E0113 21:38:46.780093 2840 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://86.109.11.45:6443/api/v1/namespaces/default/events\": dial tcp 86.109.11.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-ed112912ac.181a5e554dc7904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-ed112912ac,UID:ci-4152.2.0-a-ed112912ac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-ed112912ac,},FirstTimestamp:2025-01-13 21:38:46.774902861 +0000 UTC m=+0.291339237,LastTimestamp:2025-01-13 21:38:46.774902861 +0000 UTC m=+0.291339237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-ed112912ac,}" Jan 13 21:38:46.784675 kubelet[2840]: I0113 21:38:46.784665 2840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:38:46.785265 kubelet[2840]: I0113 21:38:46.785255 2840 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:38:46.785313 kubelet[2840]: I0113 21:38:46.785286 2840 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:38:46.785313 kubelet[2840]: I0113 21:38:46.785295 2840 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:38:46.785348 kubelet[2840]: E0113 21:38:46.785320 2840 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:38:46.785750 kubelet[2840]: W0113 21:38:46.785694 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://86.109.11.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.785793 kubelet[2840]: E0113 21:38:46.785756 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://86.109.11.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused Jan 13 21:38:46.819539 sshd[2388]: Received disconnect from 218.92.0.140 port 29806:11: [preauth] Jan 13 21:38:46.819539 sshd[2388]: Disconnected from authenticating user root 218.92.0.140 port 29806 [preauth] Jan 13 21:38:46.822750 systemd[1]: sshd@9-86.109.11.45:22-218.92.0.140:29806.service: Deactivated successfully. 
Jan 13 21:38:46.853416 kubelet[2840]: I0113 21:38:46.853317 2840 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:38:46.853416 kubelet[2840]: I0113 21:38:46.853375 2840 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:38:46.853416 kubelet[2840]: I0113 21:38:46.853432 2840 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:38:46.867059 kubelet[2840]: I0113 21:38:46.867010 2840 policy_none.go:49] "None policy: Start"
Jan 13 21:38:46.867594 kubelet[2840]: I0113 21:38:46.867546 2840 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:38:46.867594 kubelet[2840]: I0113 21:38:46.867577 2840 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:38:46.870300 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:38:46.877758 kubelet[2840]: I0113 21:38:46.877725 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:46.877943 kubelet[2840]: E0113 21:38:46.877906 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://86.109.11.45:6443/api/v1/nodes\": dial tcp 86.109.11.45:6443: connect: connection refused" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:46.886213 kubelet[2840]: E0113 21:38:46.886171 2840 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:38:46.895103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:38:46.897298 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:38:46.909897 kubelet[2840]: I0113 21:38:46.909853 2840 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:38:46.910143 kubelet[2840]: I0113 21:38:46.910093 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:38:46.910941 kubelet[2840]: E0113 21:38:46.910926 2840 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:46.979561 kubelet[2840]: E0113 21:38:46.979499 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ed112912ac?timeout=10s\": dial tcp 86.109.11.45:6443: connect: connection refused" interval="400ms"
Jan 13 21:38:47.082590 kubelet[2840]: I0113 21:38:47.082396 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.083253 kubelet[2840]: E0113 21:38:47.083194 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://86.109.11.45:6443/api/v1/nodes\": dial tcp 86.109.11.45:6443: connect: connection refused" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.087331 kubelet[2840]: I0113 21:38:47.087249 2840 topology_manager.go:215] "Topology Admit Handler" podUID="a1f5da8232806caa51a8ab6b1e1c144d" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.088371 kubelet[2840]: I0113 21:38:47.088361 2840 topology_manager.go:215] "Topology Admit Handler" podUID="ec93f03f291329e89a0e4f27d7b53c26" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.089324 kubelet[2840]: I0113 21:38:47.089262 2840 topology_manager.go:215] "Topology Admit Handler" podUID="9e2504d94cc43dd2957a51a1bf937525" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.092505 systemd[1]: Created slice kubepods-burstable-poda1f5da8232806caa51a8ab6b1e1c144d.slice - libcontainer container kubepods-burstable-poda1f5da8232806caa51a8ab6b1e1c144d.slice.
Jan 13 21:38:47.128321 systemd[1]: Created slice kubepods-burstable-podec93f03f291329e89a0e4f27d7b53c26.slice - libcontainer container kubepods-burstable-podec93f03f291329e89a0e4f27d7b53c26.slice.
Jan 13 21:38:47.140762 systemd[1]: Created slice kubepods-burstable-pod9e2504d94cc43dd2957a51a1bf937525.slice - libcontainer container kubepods-burstable-pod9e2504d94cc43dd2957a51a1bf937525.slice.
Jan 13 21:38:47.178776 kubelet[2840]: I0113 21:38:47.178700 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179089 kubelet[2840]: I0113 21:38:47.178819 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e2504d94cc43dd2957a51a1bf937525-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ed112912ac\" (UID: \"9e2504d94cc43dd2957a51a1bf937525\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179089 kubelet[2840]: I0113 21:38:47.179033 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179334 kubelet[2840]: I0113 21:38:47.179141 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179334 kubelet[2840]: I0113 21:38:47.179288 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179604 kubelet[2840]: I0113 21:38:47.179471 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec93f03f291329e89a0e4f27d7b53c26-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-ed112912ac\" (UID: \"ec93f03f291329e89a0e4f27d7b53c26\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179718 kubelet[2840]: I0113 21:38:47.179620 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e2504d94cc43dd2957a51a1bf937525-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ed112912ac\" (UID: \"9e2504d94cc43dd2957a51a1bf937525\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179839 kubelet[2840]: I0113 21:38:47.179742 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e2504d94cc43dd2957a51a1bf937525-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-ed112912ac\" (UID: \"9e2504d94cc43dd2957a51a1bf937525\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.179952 kubelet[2840]: I0113 21:38:47.179840 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.381418 kubelet[2840]: E0113 21:38:47.381195 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ed112912ac?timeout=10s\": dial tcp 86.109.11.45:6443: connect: connection refused" interval="800ms"
Jan 13 21:38:47.428141 containerd[1791]: time="2025-01-13T21:38:47.427986975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-ed112912ac,Uid:a1f5da8232806caa51a8ab6b1e1c144d,Namespace:kube-system,Attempt:0,}"
Jan 13 21:38:47.439316 containerd[1791]: time="2025-01-13T21:38:47.439189745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-ed112912ac,Uid:ec93f03f291329e89a0e4f27d7b53c26,Namespace:kube-system,Attempt:0,}"
Jan 13 21:38:47.443878 containerd[1791]: time="2025-01-13T21:38:47.443838940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-ed112912ac,Uid:9e2504d94cc43dd2957a51a1bf937525,Namespace:kube-system,Attempt:0,}"
Jan 13 21:38:47.487436 kubelet[2840]: I0113 21:38:47.487362 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.488183 kubelet[2840]: E0113 21:38:47.488102 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://86.109.11.45:6443/api/v1/nodes\": dial tcp 86.109.11.45:6443: connect: connection refused" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:47.954591 kubelet[2840]: W0113 21:38:47.954528 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://86.109.11.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ed112912ac&limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused
Jan 13 21:38:47.954591 kubelet[2840]: E0113 21:38:47.954564 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://86.109.11.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-ed112912ac&limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused
Jan 13 21:38:47.967717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119345686.mount: Deactivated successfully.
Jan 13 21:38:47.969519 containerd[1791]: time="2025-01-13T21:38:47.969485977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:38:47.969702 containerd[1791]: time="2025-01-13T21:38:47.969659468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 21:38:47.970987 containerd[1791]: time="2025-01-13T21:38:47.970950190Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:38:47.971932 containerd[1791]: time="2025-01-13T21:38:47.971894088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:38:47.972069 containerd[1791]: time="2025-01-13T21:38:47.971991283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:38:47.972468 containerd[1791]: time="2025-01-13T21:38:47.972427220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:38:47.972640 containerd[1791]: time="2025-01-13T21:38:47.972595315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:38:47.974118 containerd[1791]: time="2025-01-13T21:38:47.974049021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.647516ms"
Jan 13 21:38:47.974386 containerd[1791]: time="2025-01-13T21:38:47.974332717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:38:47.975370 containerd[1791]: time="2025-01-13T21:38:47.975330151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.108929ms"
Jan 13 21:38:47.976946 containerd[1791]: time="2025-01-13T21:38:47.976905798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.020563ms"
Jan 13 21:38:48.059519 containerd[1791]: time="2025-01-13T21:38:48.059451920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:38:48.059609 containerd[1791]: time="2025-01-13T21:38:48.059544687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:38:48.059609 containerd[1791]: time="2025-01-13T21:38:48.059570068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:38:48.059609 containerd[1791]: time="2025-01-13T21:38:48.059577651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:38:48.059660 containerd[1791]: time="2025-01-13T21:38:48.059624992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:38:48.059696 containerd[1791]: time="2025-01-13T21:38:48.059680604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:38:48.059714 containerd[1791]: time="2025-01-13T21:38:48.059676446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:38:48.059714 containerd[1791]: time="2025-01-13T21:38:48.059695445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:38:48.059746 containerd[1791]: time="2025-01-13T21:38:48.059707225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:38:48.059746 containerd[1791]: time="2025-01-13T21:38:48.059719585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:38:48.059781 containerd[1791]: time="2025-01-13T21:38:48.059744916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:38:48.059781 containerd[1791]: time="2025-01-13T21:38:48.059769169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:38:48.078307 systemd[1]: Started cri-containerd-4ff622b1ec31420a051eb5e82fba90f20c9fb7ae7a63fc8f539fcd32d96032d6.scope - libcontainer container 4ff622b1ec31420a051eb5e82fba90f20c9fb7ae7a63fc8f539fcd32d96032d6.
Jan 13 21:38:48.079163 systemd[1]: Started cri-containerd-9c3d978bb5dd9ec0989ae3b4865c0306975e8681c0e04e5c4e911c28926adacf.scope - libcontainer container 9c3d978bb5dd9ec0989ae3b4865c0306975e8681c0e04e5c4e911c28926adacf.
Jan 13 21:38:48.079798 systemd[1]: Started cri-containerd-bb6afa2617d1179662c638e758cebd137b863503fc31f17c76b0876ec009217b.scope - libcontainer container bb6afa2617d1179662c638e758cebd137b863503fc31f17c76b0876ec009217b.
Jan 13 21:38:48.101315 containerd[1791]: time="2025-01-13T21:38:48.101276240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-ed112912ac,Uid:ec93f03f291329e89a0e4f27d7b53c26,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ff622b1ec31420a051eb5e82fba90f20c9fb7ae7a63fc8f539fcd32d96032d6\""
Jan 13 21:38:48.101400 containerd[1791]: time="2025-01-13T21:38:48.101386845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-ed112912ac,Uid:a1f5da8232806caa51a8ab6b1e1c144d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c3d978bb5dd9ec0989ae3b4865c0306975e8681c0e04e5c4e911c28926adacf\""
Jan 13 21:38:48.102241 containerd[1791]: time="2025-01-13T21:38:48.102209440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-ed112912ac,Uid:9e2504d94cc43dd2957a51a1bf937525,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb6afa2617d1179662c638e758cebd137b863503fc31f17c76b0876ec009217b\""
Jan 13 21:38:48.104511 containerd[1791]: time="2025-01-13T21:38:48.104375941Z" level=info msg="CreateContainer within sandbox \"4ff622b1ec31420a051eb5e82fba90f20c9fb7ae7a63fc8f539fcd32d96032d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 21:38:48.104723 containerd[1791]: time="2025-01-13T21:38:48.104712591Z" level=info msg="CreateContainer within sandbox \"bb6afa2617d1179662c638e758cebd137b863503fc31f17c76b0876ec009217b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 21:38:48.114742 containerd[1791]: time="2025-01-13T21:38:48.114726260Z" level=info msg="CreateContainer within sandbox \"9c3d978bb5dd9ec0989ae3b4865c0306975e8681c0e04e5c4e911c28926adacf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 21:38:48.120394 containerd[1791]: time="2025-01-13T21:38:48.120379520Z" level=info msg="CreateContainer within sandbox \"4ff622b1ec31420a051eb5e82fba90f20c9fb7ae7a63fc8f539fcd32d96032d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57d19c155b2b36aba78e5412d9e0410642255700b0a3b10cc0834af6ab296a6e\""
Jan 13 21:38:48.120687 containerd[1791]: time="2025-01-13T21:38:48.120675920Z" level=info msg="StartContainer for \"57d19c155b2b36aba78e5412d9e0410642255700b0a3b10cc0834af6ab296a6e\""
Jan 13 21:38:48.121544 containerd[1791]: time="2025-01-13T21:38:48.121529484Z" level=info msg="CreateContainer within sandbox \"bb6afa2617d1179662c638e758cebd137b863503fc31f17c76b0876ec009217b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"278aa2979669341a5987e948299b1bf8c41841c00b9245cc28da4073b1502e06\""
Jan 13 21:38:48.121725 containerd[1791]: time="2025-01-13T21:38:48.121709924Z" level=info msg="StartContainer for \"278aa2979669341a5987e948299b1bf8c41841c00b9245cc28da4073b1502e06\""
Jan 13 21:38:48.123213 containerd[1791]: time="2025-01-13T21:38:48.123197469Z" level=info msg="CreateContainer within sandbox \"9c3d978bb5dd9ec0989ae3b4865c0306975e8681c0e04e5c4e911c28926adacf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a1372c93f5c4d617561202e03adc3c041fcde77f7c2470ccc3303bfec545048\""
Jan 13 21:38:48.123366 containerd[1791]: time="2025-01-13T21:38:48.123354208Z" level=info msg="StartContainer for \"8a1372c93f5c4d617561202e03adc3c041fcde77f7c2470ccc3303bfec545048\""
Jan 13 21:38:48.135360 kubelet[2840]: W0113 21:38:48.135298 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://86.109.11.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused
Jan 13 21:38:48.135360 kubelet[2840]: E0113 21:38:48.135335 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://86.109.11.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused
Jan 13 21:38:48.147315 systemd[1]: Started cri-containerd-278aa2979669341a5987e948299b1bf8c41841c00b9245cc28da4073b1502e06.scope - libcontainer container 278aa2979669341a5987e948299b1bf8c41841c00b9245cc28da4073b1502e06.
Jan 13 21:38:48.147961 systemd[1]: Started cri-containerd-57d19c155b2b36aba78e5412d9e0410642255700b0a3b10cc0834af6ab296a6e.scope - libcontainer container 57d19c155b2b36aba78e5412d9e0410642255700b0a3b10cc0834af6ab296a6e.
Jan 13 21:38:48.149786 systemd[1]: Started cri-containerd-8a1372c93f5c4d617561202e03adc3c041fcde77f7c2470ccc3303bfec545048.scope - libcontainer container 8a1372c93f5c4d617561202e03adc3c041fcde77f7c2470ccc3303bfec545048.
Jan 13 21:38:48.172814 containerd[1791]: time="2025-01-13T21:38:48.172785846Z" level=info msg="StartContainer for \"57d19c155b2b36aba78e5412d9e0410642255700b0a3b10cc0834af6ab296a6e\" returns successfully"
Jan 13 21:38:48.172912 containerd[1791]: time="2025-01-13T21:38:48.172848628Z" level=info msg="StartContainer for \"278aa2979669341a5987e948299b1bf8c41841c00b9245cc28da4073b1502e06\" returns successfully"
Jan 13 21:38:48.173606 containerd[1791]: time="2025-01-13T21:38:48.173592295Z" level=info msg="StartContainer for \"8a1372c93f5c4d617561202e03adc3c041fcde77f7c2470ccc3303bfec545048\" returns successfully"
Jan 13 21:38:48.182224 kubelet[2840]: E0113 21:38:48.182203 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-ed112912ac?timeout=10s\": dial tcp 86.109.11.45:6443: connect: connection refused" interval="1.6s"
Jan 13 21:38:48.186551 kubelet[2840]: W0113 21:38:48.186505 2840 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://86.109.11.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused
Jan 13 21:38:48.186551 kubelet[2840]: E0113 21:38:48.186552 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://86.109.11.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.45:6443: connect: connection refused
Jan 13 21:38:48.289572 kubelet[2840]: I0113 21:38:48.289489 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:48.741964 kubelet[2840]: I0113 21:38:48.741916 2840 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-ed112912ac"
Jan 13 21:38:48.746492 kubelet[2840]: E0113 21:38:48.746479 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:48.847444 kubelet[2840]: E0113 21:38:48.847401 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:48.947553 kubelet[2840]: E0113 21:38:48.947502 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.048644 kubelet[2840]: E0113 21:38:49.048443 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.149128 kubelet[2840]: E0113 21:38:49.148946 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.250288 kubelet[2840]: E0113 21:38:49.250184 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.351049 kubelet[2840]: E0113 21:38:49.350786 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.452105 kubelet[2840]: E0113 21:38:49.451979 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.552259 kubelet[2840]: E0113 21:38:49.552151 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.653454 kubelet[2840]: E0113 21:38:49.653239 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.754342 kubelet[2840]: E0113 21:38:49.754228 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.855400 kubelet[2840]: E0113 21:38:49.855301 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:49.955511 kubelet[2840]: E0113 21:38:49.955423 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.056492 kubelet[2840]: E0113 21:38:50.056412 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.157399 kubelet[2840]: E0113 21:38:50.157332 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.258044 kubelet[2840]: E0113 21:38:50.257840 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.358543 kubelet[2840]: E0113 21:38:50.358449 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.459486 kubelet[2840]: E0113 21:38:50.459426 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.560790 kubelet[2840]: E0113 21:38:50.560600 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.661776 kubelet[2840]: E0113 21:38:50.661666 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.762391 kubelet[2840]: E0113 21:38:50.762270 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:50.863557 kubelet[2840]: E0113 21:38:50.863323 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found"
Jan 13 21:38:51.643853 systemd[1]: Reloading requested from client PID 3160 ('systemctl') (unit session-11.scope)...
Jan 13 21:38:51.643859 systemd[1]: Reloading...
Jan 13 21:38:51.678091 zram_generator::config[3199]: No configuration found.
Jan 13 21:38:51.744065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:38:51.774110 kubelet[2840]: I0113 21:38:51.774066 2840 apiserver.go:52] "Watching apiserver"
Jan 13 21:38:51.776378 kubelet[2840]: I0113 21:38:51.776368 2840 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 21:38:51.811441 systemd[1]: Reloading finished in 167 ms.
Jan 13 21:38:51.877138 kubelet[2840]: I0113 21:38:51.876937 2840 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:38:51.877407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:38:51.895431 systemd[1]: Started sshd@11-86.109.11.45:22-218.92.0.206:5474.service - OpenSSH per-connection server daemon (218.92.0.206:5474).
Jan 13 21:38:51.895868 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:38:51.895986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:38:51.897213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:38:52.038602 sshd[3255]: Unable to negotiate with 218.92.0.206 port 5474: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Jan 13 21:38:52.039496 systemd[1]: sshd@11-86.109.11.45:22-218.92.0.206:5474.service: Deactivated successfully.
Jan 13 21:38:52.095582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:38:52.098196 (kubelet)[3268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:38:52.123315 kubelet[3268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:38:52.123315 kubelet[3268]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:38:52.123315 kubelet[3268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:38:52.123572 kubelet[3268]: I0113 21:38:52.123320 3268 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:38:52.126228 kubelet[3268]: I0113 21:38:52.126187 3268 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 21:38:52.126228 kubelet[3268]: I0113 21:38:52.126201 3268 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:38:52.126364 kubelet[3268]: I0113 21:38:52.126325 3268 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 21:38:52.127385 kubelet[3268]: I0113 21:38:52.127347 3268 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 21:38:52.129330 kubelet[3268]: I0113 21:38:52.129309 3268 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:38:52.139810 kubelet[3268]: I0113 21:38:52.139769 3268 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:38:52.139924 kubelet[3268]: I0113 21:38:52.139915 3268 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:38:52.140089 kubelet[3268]: I0113 21:38:52.140048 3268 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:38:52.140089 kubelet[3268]: I0113 21:38:52.140068 3268 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:38:52.140089 kubelet[3268]: I0113 21:38:52.140076 3268 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:38:52.140214 kubelet[3268]: I0113
21:38:52.140097 3268 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:38:52.140214 kubelet[3268]: I0113 21:38:52.140158 3268 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:38:52.140214 kubelet[3268]: I0113 21:38:52.140168 3268 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:38:52.140214 kubelet[3268]: I0113 21:38:52.140184 3268 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:38:52.140214 kubelet[3268]: I0113 21:38:52.140194 3268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:38:52.140664 kubelet[3268]: I0113 21:38:52.140637 3268 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:38:52.140828 kubelet[3268]: I0113 21:38:52.140817 3268 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:38:52.141245 kubelet[3268]: I0113 21:38:52.141233 3268 server.go:1256] "Started kubelet" Jan 13 21:38:52.141568 kubelet[3268]: I0113 21:38:52.141531 3268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:38:52.141629 kubelet[3268]: I0113 21:38:52.141607 3268 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:38:52.142022 kubelet[3268]: I0113 21:38:52.141906 3268 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:38:52.143315 kubelet[3268]: I0113 21:38:52.143301 3268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:38:52.143387 kubelet[3268]: I0113 21:38:52.143341 3268 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:38:52.143437 kubelet[3268]: E0113 21:38:52.143394 3268 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-ed112912ac\" not found" Jan 13 21:38:52.143437 kubelet[3268]: I0113 
21:38:52.143415 3268 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:38:52.143437 kubelet[3268]: E0113 21:38:52.143432 3268 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:38:52.143555 kubelet[3268]: I0113 21:38:52.143537 3268 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:38:52.143596 kubelet[3268]: I0113 21:38:52.143561 3268 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:38:52.144585 kubelet[3268]: I0113 21:38:52.144572 3268 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:38:52.144585 kubelet[3268]: I0113 21:38:52.144585 3268 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:38:52.144693 kubelet[3268]: I0113 21:38:52.144651 3268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:38:52.150098 kubelet[3268]: I0113 21:38:52.150032 3268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:38:52.150809 kubelet[3268]: I0113 21:38:52.150791 3268 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:38:52.150868 kubelet[3268]: I0113 21:38:52.150814 3268 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:38:52.150868 kubelet[3268]: I0113 21:38:52.150830 3268 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:38:52.150932 kubelet[3268]: E0113 21:38:52.150883 3268 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:38:52.164798 kubelet[3268]: I0113 21:38:52.164749 3268 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:38:52.164798 kubelet[3268]: I0113 21:38:52.164772 3268 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:38:52.164798 kubelet[3268]: I0113 21:38:52.164783 3268 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:38:52.164924 kubelet[3268]: I0113 21:38:52.164899 3268 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:38:52.164924 kubelet[3268]: I0113 21:38:52.164916 3268 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:38:52.164924 kubelet[3268]: I0113 21:38:52.164921 3268 policy_none.go:49] "None policy: Start" Jan 13 21:38:52.165239 kubelet[3268]: I0113 21:38:52.165204 3268 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:38:52.165239 kubelet[3268]: I0113 21:38:52.165219 3268 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:38:52.165374 kubelet[3268]: I0113 21:38:52.165333 3268 state_mem.go:75] "Updated machine memory state" Jan 13 21:38:52.168136 kubelet[3268]: I0113 21:38:52.168095 3268 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:38:52.168285 kubelet[3268]: I0113 21:38:52.168273 3268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:38:52.245231 kubelet[3268]: I0113 21:38:52.245214 3268 kubelet_node_status.go:73] "Attempting to register 
node" node="ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.248806 kubelet[3268]: I0113 21:38:52.248759 3268 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.248847 kubelet[3268]: I0113 21:38:52.248811 3268 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.251248 kubelet[3268]: I0113 21:38:52.251209 3268 topology_manager.go:215] "Topology Admit Handler" podUID="ec93f03f291329e89a0e4f27d7b53c26" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.251248 kubelet[3268]: I0113 21:38:52.251249 3268 topology_manager.go:215] "Topology Admit Handler" podUID="9e2504d94cc43dd2957a51a1bf937525" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.251299 kubelet[3268]: I0113 21:38:52.251275 3268 topology_manager.go:215] "Topology Admit Handler" podUID="a1f5da8232806caa51a8ab6b1e1c144d" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.254476 kubelet[3268]: W0113 21:38:52.254464 3268 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:38:52.254893 kubelet[3268]: W0113 21:38:52.254884 3268 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:38:52.254933 kubelet[3268]: W0113 21:38:52.254913 3268 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:38:52.344238 kubelet[3268]: I0113 21:38:52.344191 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e2504d94cc43dd2957a51a1bf937525-ca-certs\") pod 
\"kube-apiserver-ci-4152.2.0-a-ed112912ac\" (UID: \"9e2504d94cc43dd2957a51a1bf937525\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445097 kubelet[3268]: I0113 21:38:52.445028 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445097 kubelet[3268]: I0113 21:38:52.445078 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445097 kubelet[3268]: I0113 21:38:52.445096 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445244 kubelet[3268]: I0113 21:38:52.445113 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445244 kubelet[3268]: I0113 21:38:52.445132 3268 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec93f03f291329e89a0e4f27d7b53c26-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-ed112912ac\" (UID: \"ec93f03f291329e89a0e4f27d7b53c26\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445244 kubelet[3268]: I0113 21:38:52.445157 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e2504d94cc43dd2957a51a1bf937525-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-ed112912ac\" (UID: \"9e2504d94cc43dd2957a51a1bf937525\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445244 kubelet[3268]: I0113 21:38:52.445184 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e2504d94cc43dd2957a51a1bf937525-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-ed112912ac\" (UID: \"9e2504d94cc43dd2957a51a1bf937525\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:52.445244 kubelet[3268]: I0113 21:38:52.445205 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1f5da8232806caa51a8ab6b1e1c144d-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" (UID: \"a1f5da8232806caa51a8ab6b1e1c144d\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:53.142076 kubelet[3268]: I0113 21:38:53.141995 3268 apiserver.go:52] "Watching apiserver" Jan 13 21:38:53.143551 kubelet[3268]: I0113 21:38:53.143509 3268 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:38:53.157323 kubelet[3268]: W0113 21:38:53.157279 3268 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:38:53.157323 kubelet[3268]: E0113 21:38:53.157321 3268 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.0-a-ed112912ac\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" Jan 13 21:38:53.168570 kubelet[3268]: I0113 21:38:53.168540 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-a-ed112912ac" podStartSLOduration=1.168495773 podStartE2EDuration="1.168495773s" podCreationTimestamp="2025-01-13 21:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:38:53.16469319 +0000 UTC m=+1.064392756" watchObservedRunningTime="2025-01-13 21:38:53.168495773 +0000 UTC m=+1.068195339" Jan 13 21:38:53.168661 kubelet[3268]: I0113 21:38:53.168611 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-a-ed112912ac" podStartSLOduration=1.168598348 podStartE2EDuration="1.168598348s" podCreationTimestamp="2025-01-13 21:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:38:53.168494159 +0000 UTC m=+1.068193724" watchObservedRunningTime="2025-01-13 21:38:53.168598348 +0000 UTC m=+1.068297910" Jan 13 21:38:53.172325 kubelet[3268]: I0113 21:38:53.172310 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-ed112912ac" podStartSLOduration=1.172290843 podStartE2EDuration="1.172290843s" podCreationTimestamp="2025-01-13 21:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:38:53.172182109 +0000 UTC 
m=+1.071881678" watchObservedRunningTime="2025-01-13 21:38:53.172290843 +0000 UTC m=+1.071990408" Jan 13 21:38:55.796614 sudo[2069]: pam_unix(sudo:session): session closed for user root Jan 13 21:38:55.797317 sshd[2068]: Connection closed by 147.75.109.163 port 37462 Jan 13 21:38:55.797472 sshd-session[2066]: pam_unix(sshd:session): session closed for user core Jan 13 21:38:55.798859 systemd[1]: sshd@8-86.109.11.45:22-147.75.109.163:37462.service: Deactivated successfully. Jan 13 21:38:55.799711 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:38:55.799819 systemd[1]: session-11.scope: Consumed 3.314s CPU time, 196.9M memory peak, 0B memory swap peak. Jan 13 21:38:55.800395 systemd-logind[1781]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:38:55.800912 systemd-logind[1781]: Removed session 11. Jan 13 21:39:05.094630 update_engine[1786]: I20250113 21:39:05.094501 1786 update_attempter.cc:509] Updating boot flags... Jan 13 21:39:05.134016 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (3437) Jan 13 21:39:05.160050 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (3433) Jan 13 21:39:05.834868 kubelet[3268]: I0113 21:39:05.834801 3268 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:39:05.835816 containerd[1791]: time="2025-01-13T21:39:05.835586166Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:39:05.836478 kubelet[3268]: I0113 21:39:05.836158 3268 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:39:06.501429 kubelet[3268]: I0113 21:39:06.501354 3268 topology_manager.go:215] "Topology Admit Handler" podUID="6dce83e7-53a2-4a62-9397-23a409aacb94" podNamespace="kube-system" podName="kube-proxy-v9psh" Jan 13 21:39:06.516912 systemd[1]: Created slice kubepods-besteffort-pod6dce83e7_53a2_4a62_9397_23a409aacb94.slice - libcontainer container kubepods-besteffort-pod6dce83e7_53a2_4a62_9397_23a409aacb94.slice. Jan 13 21:39:06.546672 kubelet[3268]: I0113 21:39:06.546610 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6dce83e7-53a2-4a62-9397-23a409aacb94-kube-proxy\") pod \"kube-proxy-v9psh\" (UID: \"6dce83e7-53a2-4a62-9397-23a409aacb94\") " pod="kube-system/kube-proxy-v9psh" Jan 13 21:39:06.547209 kubelet[3268]: I0113 21:39:06.547118 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjr7p\" (UniqueName: \"kubernetes.io/projected/6dce83e7-53a2-4a62-9397-23a409aacb94-kube-api-access-mjr7p\") pod \"kube-proxy-v9psh\" (UID: \"6dce83e7-53a2-4a62-9397-23a409aacb94\") " pod="kube-system/kube-proxy-v9psh" Jan 13 21:39:06.547668 kubelet[3268]: I0113 21:39:06.547477 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dce83e7-53a2-4a62-9397-23a409aacb94-xtables-lock\") pod \"kube-proxy-v9psh\" (UID: \"6dce83e7-53a2-4a62-9397-23a409aacb94\") " pod="kube-system/kube-proxy-v9psh" Jan 13 21:39:06.547668 kubelet[3268]: I0113 21:39:06.547632 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dce83e7-53a2-4a62-9397-23a409aacb94-lib-modules\") pod 
\"kube-proxy-v9psh\" (UID: \"6dce83e7-53a2-4a62-9397-23a409aacb94\") " pod="kube-system/kube-proxy-v9psh" Jan 13 21:39:06.842467 containerd[1791]: time="2025-01-13T21:39:06.842233227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9psh,Uid:6dce83e7-53a2-4a62-9397-23a409aacb94,Namespace:kube-system,Attempt:0,}" Jan 13 21:39:06.853320 containerd[1791]: time="2025-01-13T21:39:06.853251166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:06.853320 containerd[1791]: time="2025-01-13T21:39:06.853279936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:06.853320 containerd[1791]: time="2025-01-13T21:39:06.853286516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:06.853486 containerd[1791]: time="2025-01-13T21:39:06.853323902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:06.870312 systemd[1]: Started cri-containerd-4103af6be26bc50c1e7aa086af36898e56aaffa196481baee9ee5b380551bae8.scope - libcontainer container 4103af6be26bc50c1e7aa086af36898e56aaffa196481baee9ee5b380551bae8. 
Jan 13 21:39:06.881437 containerd[1791]: time="2025-01-13T21:39:06.881409663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9psh,Uid:6dce83e7-53a2-4a62-9397-23a409aacb94,Namespace:kube-system,Attempt:0,} returns sandbox id \"4103af6be26bc50c1e7aa086af36898e56aaffa196481baee9ee5b380551bae8\"" Jan 13 21:39:06.882903 containerd[1791]: time="2025-01-13T21:39:06.882884300Z" level=info msg="CreateContainer within sandbox \"4103af6be26bc50c1e7aa086af36898e56aaffa196481baee9ee5b380551bae8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:39:06.888600 containerd[1791]: time="2025-01-13T21:39:06.888583443Z" level=info msg="CreateContainer within sandbox \"4103af6be26bc50c1e7aa086af36898e56aaffa196481baee9ee5b380551bae8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a2961cbcf8ce620bd36913750d746517a3687ff4999f80fbce1834fcb99dcf8\"" Jan 13 21:39:06.888865 containerd[1791]: time="2025-01-13T21:39:06.888850901Z" level=info msg="StartContainer for \"8a2961cbcf8ce620bd36913750d746517a3687ff4999f80fbce1834fcb99dcf8\"" Jan 13 21:39:06.915258 systemd[1]: Started cri-containerd-8a2961cbcf8ce620bd36913750d746517a3687ff4999f80fbce1834fcb99dcf8.scope - libcontainer container 8a2961cbcf8ce620bd36913750d746517a3687ff4999f80fbce1834fcb99dcf8. Jan 13 21:39:06.939218 containerd[1791]: time="2025-01-13T21:39:06.939182483Z" level=info msg="StartContainer for \"8a2961cbcf8ce620bd36913750d746517a3687ff4999f80fbce1834fcb99dcf8\" returns successfully" Jan 13 21:39:06.942506 kubelet[3268]: I0113 21:39:06.942482 3268 topology_manager.go:215] "Topology Admit Handler" podUID="5260cfd3-0479-4c23-a3b7-8046907a0630" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-qcctm" Jan 13 21:39:06.947771 systemd[1]: Created slice kubepods-besteffort-pod5260cfd3_0479_4c23_a3b7_8046907a0630.slice - libcontainer container kubepods-besteffort-pod5260cfd3_0479_4c23_a3b7_8046907a0630.slice. 
Jan 13 21:39:06.951484 kubelet[3268]: I0113 21:39:06.951437 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5260cfd3-0479-4c23-a3b7-8046907a0630-var-lib-calico\") pod \"tigera-operator-c7ccbd65-qcctm\" (UID: \"5260cfd3-0479-4c23-a3b7-8046907a0630\") " pod="tigera-operator/tigera-operator-c7ccbd65-qcctm" Jan 13 21:39:06.951484 kubelet[3268]: I0113 21:39:06.951473 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9s2\" (UniqueName: \"kubernetes.io/projected/5260cfd3-0479-4c23-a3b7-8046907a0630-kube-api-access-5s9s2\") pod \"tigera-operator-c7ccbd65-qcctm\" (UID: \"5260cfd3-0479-4c23-a3b7-8046907a0630\") " pod="tigera-operator/tigera-operator-c7ccbd65-qcctm" Jan 13 21:39:07.250287 containerd[1791]: time="2025-01-13T21:39:07.250258617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-qcctm,Uid:5260cfd3-0479-4c23-a3b7-8046907a0630,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:39:07.259852 containerd[1791]: time="2025-01-13T21:39:07.259777695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:07.259852 containerd[1791]: time="2025-01-13T21:39:07.259806042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:07.260054 containerd[1791]: time="2025-01-13T21:39:07.259820793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:07.260093 containerd[1791]: time="2025-01-13T21:39:07.260082034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:07.276128 systemd[1]: Started cri-containerd-0d05645fe44656558256c1854c5f9211c5a180c6a665200696500ed4ef2d5079.scope - libcontainer container 0d05645fe44656558256c1854c5f9211c5a180c6a665200696500ed4ef2d5079. Jan 13 21:39:07.297943 containerd[1791]: time="2025-01-13T21:39:07.297906320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-qcctm,Uid:5260cfd3-0479-4c23-a3b7-8046907a0630,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0d05645fe44656558256c1854c5f9211c5a180c6a665200696500ed4ef2d5079\"" Jan 13 21:39:07.298668 containerd[1791]: time="2025-01-13T21:39:07.298617084Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:39:07.671883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175449075.mount: Deactivated successfully. Jan 13 21:39:12.333526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333888241.mount: Deactivated successfully. 
Jan 13 21:39:12.528230 containerd[1791]: time="2025-01-13T21:39:12.528176266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:12.528452 containerd[1791]: time="2025-01-13T21:39:12.528371424Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763705" Jan 13 21:39:12.528670 containerd[1791]: time="2025-01-13T21:39:12.528654277Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:12.530231 containerd[1791]: time="2025-01-13T21:39:12.530215405Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:12.530625 containerd[1791]: time="2025-01-13T21:39:12.530602217Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.231967765s" Jan 13 21:39:12.530625 containerd[1791]: time="2025-01-13T21:39:12.530616710Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:39:12.531367 containerd[1791]: time="2025-01-13T21:39:12.531355662Z" level=info msg="CreateContainer within sandbox \"0d05645fe44656558256c1854c5f9211c5a180c6a665200696500ed4ef2d5079\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:39:12.535155 containerd[1791]: time="2025-01-13T21:39:12.535140358Z" level=info msg="CreateContainer within sandbox 
\"0d05645fe44656558256c1854c5f9211c5a180c6a665200696500ed4ef2d5079\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a52eda0778ed38cc87b54f72114bb8f05556a09cebe69e5d79c910f8fcca0352\"" Jan 13 21:39:12.535461 containerd[1791]: time="2025-01-13T21:39:12.535401656Z" level=info msg="StartContainer for \"a52eda0778ed38cc87b54f72114bb8f05556a09cebe69e5d79c910f8fcca0352\"" Jan 13 21:39:12.567274 systemd[1]: Started cri-containerd-a52eda0778ed38cc87b54f72114bb8f05556a09cebe69e5d79c910f8fcca0352.scope - libcontainer container a52eda0778ed38cc87b54f72114bb8f05556a09cebe69e5d79c910f8fcca0352. Jan 13 21:39:12.587045 containerd[1791]: time="2025-01-13T21:39:12.586976706Z" level=info msg="StartContainer for \"a52eda0778ed38cc87b54f72114bb8f05556a09cebe69e5d79c910f8fcca0352\" returns successfully" Jan 13 21:39:13.224049 kubelet[3268]: I0113 21:39:13.223963 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v9psh" podStartSLOduration=7.223873441 podStartE2EDuration="7.223873441s" podCreationTimestamp="2025-01-13 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:39:07.210280862 +0000 UTC m=+15.109980493" watchObservedRunningTime="2025-01-13 21:39:13.223873441 +0000 UTC m=+21.123573058" Jan 13 21:39:13.225030 kubelet[3268]: I0113 21:39:13.224267 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-qcctm" podStartSLOduration=1.9918943059999998 podStartE2EDuration="7.224201778s" podCreationTimestamp="2025-01-13 21:39:06 +0000 UTC" firstStartedPulling="2025-01-13 21:39:07.298430638 +0000 UTC m=+15.198130204" lastFinishedPulling="2025-01-13 21:39:12.53073811 +0000 UTC m=+20.430437676" observedRunningTime="2025-01-13 21:39:13.223793896 +0000 UTC m=+21.123493614" watchObservedRunningTime="2025-01-13 21:39:13.224201778 +0000 UTC 
m=+21.123901393" Jan 13 21:39:15.414602 kubelet[3268]: I0113 21:39:15.414575 3268 topology_manager.go:215] "Topology Admit Handler" podUID="2462481b-0cf4-4145-b939-5ed8d1a1f80f" podNamespace="calico-system" podName="calico-typha-f66f6f8b7-w4flk" Jan 13 21:39:15.420664 systemd[1]: Created slice kubepods-besteffort-pod2462481b_0cf4_4145_b939_5ed8d1a1f80f.slice - libcontainer container kubepods-besteffort-pod2462481b_0cf4_4145_b939_5ed8d1a1f80f.slice. Jan 13 21:39:15.428015 kubelet[3268]: I0113 21:39:15.427985 3268 topology_manager.go:215] "Topology Admit Handler" podUID="5270e486-7c1d-4128-bdd3-f42306177470" podNamespace="calico-system" podName="calico-node-srrt9" Jan 13 21:39:15.431218 systemd[1]: Created slice kubepods-besteffort-pod5270e486_7c1d_4128_bdd3_f42306177470.slice - libcontainer container kubepods-besteffort-pod5270e486_7c1d_4128_bdd3_f42306177470.slice. Jan 13 21:39:15.568758 kubelet[3268]: I0113 21:39:15.568674 3268 topology_manager.go:215] "Topology Admit Handler" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" podNamespace="calico-system" podName="csi-node-driver-nhvsz" Jan 13 21:39:15.569385 kubelet[3268]: E0113 21:39:15.569306 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:15.610624 kubelet[3268]: I0113 21:39:15.610562 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-cni-net-dir\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.610801 kubelet[3268]: I0113 21:39:15.610646 3268 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2462481b-0cf4-4145-b939-5ed8d1a1f80f-tigera-ca-bundle\") pod \"calico-typha-f66f6f8b7-w4flk\" (UID: \"2462481b-0cf4-4145-b939-5ed8d1a1f80f\") " pod="calico-system/calico-typha-f66f6f8b7-w4flk" Jan 13 21:39:15.610801 kubelet[3268]: I0113 21:39:15.610681 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5270e486-7c1d-4128-bdd3-f42306177470-node-certs\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.610801 kubelet[3268]: I0113 21:39:15.610717 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-flexvol-driver-host\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.610801 kubelet[3268]: I0113 21:39:15.610758 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-xtables-lock\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.610949 kubelet[3268]: I0113 21:39:15.610812 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-var-lib-calico\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.610949 kubelet[3268]: I0113 21:39:15.610844 3268 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqxq\" (UniqueName: \"kubernetes.io/projected/5270e486-7c1d-4128-bdd3-f42306177470-kube-api-access-wlqxq\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.610949 kubelet[3268]: I0113 21:39:15.610871 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2462481b-0cf4-4145-b939-5ed8d1a1f80f-typha-certs\") pod \"calico-typha-f66f6f8b7-w4flk\" (UID: \"2462481b-0cf4-4145-b939-5ed8d1a1f80f\") " pod="calico-system/calico-typha-f66f6f8b7-w4flk" Jan 13 21:39:15.610949 kubelet[3268]: I0113 21:39:15.610915 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-var-run-calico\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.611113 kubelet[3268]: I0113 21:39:15.610956 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-cni-log-dir\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.611113 kubelet[3268]: I0113 21:39:15.611042 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-policysync\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.611113 kubelet[3268]: I0113 21:39:15.611072 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-cni-bin-dir\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.611113 kubelet[3268]: I0113 21:39:15.611099 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnc4b\" (UniqueName: \"kubernetes.io/projected/2462481b-0cf4-4145-b939-5ed8d1a1f80f-kube-api-access-vnc4b\") pod \"calico-typha-f66f6f8b7-w4flk\" (UID: \"2462481b-0cf4-4145-b939-5ed8d1a1f80f\") " pod="calico-system/calico-typha-f66f6f8b7-w4flk" Jan 13 21:39:15.611256 kubelet[3268]: I0113 21:39:15.611161 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5270e486-7c1d-4128-bdd3-f42306177470-lib-modules\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.611256 kubelet[3268]: I0113 21:39:15.611216 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5270e486-7c1d-4128-bdd3-f42306177470-tigera-ca-bundle\") pod \"calico-node-srrt9\" (UID: \"5270e486-7c1d-4128-bdd3-f42306177470\") " pod="calico-system/calico-node-srrt9" Jan 13 21:39:15.712212 kubelet[3268]: I0113 21:39:15.712115 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d53850a4-8d71-4e78-b476-3e30d51488fd-socket-dir\") pod \"csi-node-driver-nhvsz\" (UID: \"d53850a4-8d71-4e78-b476-3e30d51488fd\") " pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:15.712492 kubelet[3268]: I0113 21:39:15.712234 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d53850a4-8d71-4e78-b476-3e30d51488fd-registration-dir\") pod \"csi-node-driver-nhvsz\" (UID: \"d53850a4-8d71-4e78-b476-3e30d51488fd\") " pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:15.712492 kubelet[3268]: I0113 21:39:15.712423 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d53850a4-8d71-4e78-b476-3e30d51488fd-varrun\") pod \"csi-node-driver-nhvsz\" (UID: \"d53850a4-8d71-4e78-b476-3e30d51488fd\") " pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:15.712781 kubelet[3268]: I0113 21:39:15.712690 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw48z\" (UniqueName: \"kubernetes.io/projected/d53850a4-8d71-4e78-b476-3e30d51488fd-kube-api-access-fw48z\") pod \"csi-node-driver-nhvsz\" (UID: \"d53850a4-8d71-4e78-b476-3e30d51488fd\") " pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:15.713747 kubelet[3268]: I0113 21:39:15.713676 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d53850a4-8d71-4e78-b476-3e30d51488fd-kubelet-dir\") pod \"csi-node-driver-nhvsz\" (UID: \"d53850a4-8d71-4e78-b476-3e30d51488fd\") " pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:15.716096 kubelet[3268]: E0113 21:39:15.716001 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.716096 kubelet[3268]: W0113 21:39:15.716090 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.716574 kubelet[3268]: E0113 21:39:15.716173 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.717086 kubelet[3268]: E0113 21:39:15.717039 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.717086 kubelet[3268]: W0113 21:39:15.717077 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.717404 kubelet[3268]: E0113 21:39:15.717145 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.720653 kubelet[3268]: E0113 21:39:15.720572 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.720653 kubelet[3268]: W0113 21:39:15.720612 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.720653 kubelet[3268]: E0113 21:39:15.720660 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.722291 kubelet[3268]: E0113 21:39:15.722202 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.722291 kubelet[3268]: W0113 21:39:15.722253 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.722605 kubelet[3268]: E0113 21:39:15.722315 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.731984 kubelet[3268]: E0113 21:39:15.731898 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.731984 kubelet[3268]: W0113 21:39:15.731940 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.732359 kubelet[3268]: E0113 21:39:15.731997 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.733644 kubelet[3268]: E0113 21:39:15.733564 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.733644 kubelet[3268]: W0113 21:39:15.733612 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.734060 kubelet[3268]: E0113 21:39:15.733678 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.815155 kubelet[3268]: E0113 21:39:15.815076 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.815155 kubelet[3268]: W0113 21:39:15.815113 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.815155 kubelet[3268]: E0113 21:39:15.815159 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.815866 kubelet[3268]: E0113 21:39:15.815784 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.815866 kubelet[3268]: W0113 21:39:15.815820 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.815866 kubelet[3268]: E0113 21:39:15.815865 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.816508 kubelet[3268]: E0113 21:39:15.816432 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.816508 kubelet[3268]: W0113 21:39:15.816475 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.816782 kubelet[3268]: E0113 21:39:15.816524 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.817102 kubelet[3268]: E0113 21:39:15.817000 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.817102 kubelet[3268]: W0113 21:39:15.817051 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.817102 kubelet[3268]: E0113 21:39:15.817089 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.817669 kubelet[3268]: E0113 21:39:15.817590 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.817669 kubelet[3268]: W0113 21:39:15.817623 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.817915 kubelet[3268]: E0113 21:39:15.817750 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.818195 kubelet[3268]: E0113 21:39:15.818122 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.818195 kubelet[3268]: W0113 21:39:15.818146 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.818481 kubelet[3268]: E0113 21:39:15.818263 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.818766 kubelet[3268]: E0113 21:39:15.818677 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.818766 kubelet[3268]: W0113 21:39:15.818713 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.819058 kubelet[3268]: E0113 21:39:15.818835 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.819362 kubelet[3268]: E0113 21:39:15.819272 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.819362 kubelet[3268]: W0113 21:39:15.819306 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.819362 kubelet[3268]: E0113 21:39:15.819355 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.820028 kubelet[3268]: E0113 21:39:15.819973 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.820165 kubelet[3268]: W0113 21:39:15.820035 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.820165 kubelet[3268]: E0113 21:39:15.820086 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.820655 kubelet[3268]: E0113 21:39:15.820622 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.820756 kubelet[3268]: W0113 21:39:15.820656 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.820756 kubelet[3268]: E0113 21:39:15.820719 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.821230 kubelet[3268]: E0113 21:39:15.821183 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.821230 kubelet[3268]: W0113 21:39:15.821214 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.821484 kubelet[3268]: E0113 21:39:15.821334 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.821856 kubelet[3268]: E0113 21:39:15.821772 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.821856 kubelet[3268]: W0113 21:39:15.821807 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.822216 kubelet[3268]: E0113 21:39:15.821899 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.822507 kubelet[3268]: E0113 21:39:15.822431 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.822507 kubelet[3268]: W0113 21:39:15.822463 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.822781 kubelet[3268]: E0113 21:39:15.822570 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.822988 kubelet[3268]: E0113 21:39:15.822952 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.822988 kubelet[3268]: W0113 21:39:15.822981 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.823281 kubelet[3268]: E0113 21:39:15.823114 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.823563 kubelet[3268]: E0113 21:39:15.823482 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.823563 kubelet[3268]: W0113 21:39:15.823519 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.823836 kubelet[3268]: E0113 21:39:15.823684 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.824107 kubelet[3268]: E0113 21:39:15.824022 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.824107 kubelet[3268]: W0113 21:39:15.824061 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.824409 kubelet[3268]: E0113 21:39:15.824151 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.824714 kubelet[3268]: E0113 21:39:15.824630 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.824714 kubelet[3268]: W0113 21:39:15.824667 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.825035 kubelet[3268]: E0113 21:39:15.824797 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.825204 kubelet[3268]: E0113 21:39:15.825169 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.825322 kubelet[3268]: W0113 21:39:15.825204 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.825445 kubelet[3268]: E0113 21:39:15.825332 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.825810 kubelet[3268]: E0113 21:39:15.825732 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.825810 kubelet[3268]: W0113 21:39:15.825765 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.826144 kubelet[3268]: E0113 21:39:15.825896 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.826518 kubelet[3268]: E0113 21:39:15.826434 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.826518 kubelet[3268]: W0113 21:39:15.826469 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.826816 kubelet[3268]: E0113 21:39:15.826581 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.827100 kubelet[3268]: E0113 21:39:15.826999 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.827100 kubelet[3268]: W0113 21:39:15.827052 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.827394 kubelet[3268]: E0113 21:39:15.827155 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.827671 kubelet[3268]: E0113 21:39:15.827594 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.827671 kubelet[3268]: W0113 21:39:15.827627 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.827970 kubelet[3268]: E0113 21:39:15.827737 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.828339 kubelet[3268]: E0113 21:39:15.828253 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.828339 kubelet[3268]: W0113 21:39:15.828287 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.828619 kubelet[3268]: E0113 21:39:15.828380 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.828968 kubelet[3268]: E0113 21:39:15.828890 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.828968 kubelet[3268]: W0113 21:39:15.828925 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.829305 kubelet[3268]: E0113 21:39:15.829001 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:15.829636 kubelet[3268]: E0113 21:39:15.829560 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.829636 kubelet[3268]: W0113 21:39:15.829593 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.829636 kubelet[3268]: E0113 21:39:15.829645 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:39:15.840682 kubelet[3268]: E0113 21:39:15.840603 3268 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:39:15.840682 kubelet[3268]: W0113 21:39:15.840641 3268 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:39:15.840682 kubelet[3268]: E0113 21:39:15.840689 3268 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:39:16.024246 containerd[1791]: time="2025-01-13T21:39:16.024035494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f66f6f8b7-w4flk,Uid:2462481b-0cf4-4145-b939-5ed8d1a1f80f,Namespace:calico-system,Attempt:0,}" Jan 13 21:39:16.033477 containerd[1791]: time="2025-01-13T21:39:16.033454157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srrt9,Uid:5270e486-7c1d-4128-bdd3-f42306177470,Namespace:calico-system,Attempt:0,}" Jan 13 21:39:16.035443 containerd[1791]: time="2025-01-13T21:39:16.035368608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:16.035652 containerd[1791]: time="2025-01-13T21:39:16.035636234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:16.035652 containerd[1791]: time="2025-01-13T21:39:16.035647910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:16.035703 containerd[1791]: time="2025-01-13T21:39:16.035693069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:16.042598 containerd[1791]: time="2025-01-13T21:39:16.042556779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:16.042598 containerd[1791]: time="2025-01-13T21:39:16.042590148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:16.042598 containerd[1791]: time="2025-01-13T21:39:16.042597191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:16.042693 containerd[1791]: time="2025-01-13T21:39:16.042638931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:16.061527 systemd[1]: Started cri-containerd-155743149a25249ad5cee8914c59ce72a2ddd7f9e2cffeb4d6d91f6a23c5470f.scope - libcontainer container 155743149a25249ad5cee8914c59ce72a2ddd7f9e2cffeb4d6d91f6a23c5470f. Jan 13 21:39:16.069694 systemd[1]: Started cri-containerd-451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c.scope - libcontainer container 451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c. 
Jan 13 21:39:16.098248 containerd[1791]: time="2025-01-13T21:39:16.098217824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srrt9,Uid:5270e486-7c1d-4128-bdd3-f42306177470,Namespace:calico-system,Attempt:0,} returns sandbox id \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\"" Jan 13 21:39:16.099149 containerd[1791]: time="2025-01-13T21:39:16.099095938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:39:16.109614 containerd[1791]: time="2025-01-13T21:39:16.109591891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f66f6f8b7-w4flk,Uid:2462481b-0cf4-4145-b939-5ed8d1a1f80f,Namespace:calico-system,Attempt:0,} returns sandbox id \"155743149a25249ad5cee8914c59ce72a2ddd7f9e2cffeb4d6d91f6a23c5470f\"" Jan 13 21:39:17.152438 kubelet[3268]: E0113 21:39:17.152341 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:17.407897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975955007.mount: Deactivated successfully. 
Jan 13 21:39:17.446719 containerd[1791]: time="2025-01-13T21:39:17.446668893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:17.446914 containerd[1791]: time="2025-01-13T21:39:17.446892572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:39:17.447263 containerd[1791]: time="2025-01-13T21:39:17.447229023Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:17.448169 containerd[1791]: time="2025-01-13T21:39:17.448130901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:17.448552 containerd[1791]: time="2025-01-13T21:39:17.448508214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.349394128s" Jan 13 21:39:17.448552 containerd[1791]: time="2025-01-13T21:39:17.448524098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:39:17.448869 containerd[1791]: time="2025-01-13T21:39:17.448859353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:39:17.449454 containerd[1791]: time="2025-01-13T21:39:17.449439452Z" level=info msg="CreateContainer within 
sandbox \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:39:17.455311 containerd[1791]: time="2025-01-13T21:39:17.455298656Z" level=info msg="CreateContainer within sandbox \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354\"" Jan 13 21:39:17.455459 containerd[1791]: time="2025-01-13T21:39:17.455448583Z" level=info msg="StartContainer for \"8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354\"" Jan 13 21:39:17.477143 systemd[1]: Started cri-containerd-8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354.scope - libcontainer container 8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354. Jan 13 21:39:17.494031 containerd[1791]: time="2025-01-13T21:39:17.493970179Z" level=info msg="StartContainer for \"8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354\" returns successfully" Jan 13 21:39:17.502391 systemd[1]: cri-containerd-8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354.scope: Deactivated successfully. 
Jan 13 21:39:17.771092 containerd[1791]: time="2025-01-13T21:39:17.771054600Z" level=info msg="shim disconnected" id=8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354 namespace=k8s.io Jan 13 21:39:17.771092 containerd[1791]: time="2025-01-13T21:39:17.771088414Z" level=warning msg="cleaning up after shim disconnected" id=8a0411777a2d32d6db652e99c8ae0f6cc2447fc30f8aed9c33df6590ec905354 namespace=k8s.io Jan 13 21:39:17.771092 containerd[1791]: time="2025-01-13T21:39:17.771094456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:39:19.124930 containerd[1791]: time="2025-01-13T21:39:19.124878832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:19.125168 containerd[1791]: time="2025-01-13T21:39:19.125066693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 21:39:19.125489 containerd[1791]: time="2025-01-13T21:39:19.125440694Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:19.126386 containerd[1791]: time="2025-01-13T21:39:19.126345608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:19.126811 containerd[1791]: time="2025-01-13T21:39:19.126771345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.67789841s" Jan 13 21:39:19.126811 containerd[1791]: 
time="2025-01-13T21:39:19.126786021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:39:19.127133 containerd[1791]: time="2025-01-13T21:39:19.127083677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:39:19.130142 containerd[1791]: time="2025-01-13T21:39:19.130096437Z" level=info msg="CreateContainer within sandbox \"155743149a25249ad5cee8914c59ce72a2ddd7f9e2cffeb4d6d91f6a23c5470f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:39:19.135672 containerd[1791]: time="2025-01-13T21:39:19.135628717Z" level=info msg="CreateContainer within sandbox \"155743149a25249ad5cee8914c59ce72a2ddd7f9e2cffeb4d6d91f6a23c5470f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9794c019b56417388733749fa4d6f79a1ae7a3d9ff2af993635c736f25064baa\"" Jan 13 21:39:19.135817 containerd[1791]: time="2025-01-13T21:39:19.135804659Z" level=info msg="StartContainer for \"9794c019b56417388733749fa4d6f79a1ae7a3d9ff2af993635c736f25064baa\"" Jan 13 21:39:19.151490 kubelet[3268]: E0113 21:39:19.151447 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:19.169220 systemd[1]: Started cri-containerd-9794c019b56417388733749fa4d6f79a1ae7a3d9ff2af993635c736f25064baa.scope - libcontainer container 9794c019b56417388733749fa4d6f79a1ae7a3d9ff2af993635c736f25064baa. 
Jan 13 21:39:19.199372 containerd[1791]: time="2025-01-13T21:39:19.199346057Z" level=info msg="StartContainer for \"9794c019b56417388733749fa4d6f79a1ae7a3d9ff2af993635c736f25064baa\" returns successfully" Jan 13 21:39:19.238420 kubelet[3268]: I0113 21:39:19.238393 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-f66f6f8b7-w4flk" podStartSLOduration=1.221457769 podStartE2EDuration="4.238361934s" podCreationTimestamp="2025-01-13 21:39:15 +0000 UTC" firstStartedPulling="2025-01-13 21:39:16.110060876 +0000 UTC m=+24.009760441" lastFinishedPulling="2025-01-13 21:39:19.126965042 +0000 UTC m=+27.026664606" observedRunningTime="2025-01-13 21:39:19.238259237 +0000 UTC m=+27.137958808" watchObservedRunningTime="2025-01-13 21:39:19.238361934 +0000 UTC m=+27.138061505" Jan 13 21:39:20.228376 kubelet[3268]: I0113 21:39:20.228286 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:39:21.151708 kubelet[3268]: E0113 21:39:21.151692 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:21.657156 containerd[1791]: time="2025-01-13T21:39:21.657107797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:21.657337 containerd[1791]: time="2025-01-13T21:39:21.657216433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:39:21.657593 containerd[1791]: time="2025-01-13T21:39:21.657551248Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 21:39:21.658747 containerd[1791]: time="2025-01-13T21:39:21.658706332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:21.659146 containerd[1791]: time="2025-01-13T21:39:21.659105473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.532003769s" Jan 13 21:39:21.659146 containerd[1791]: time="2025-01-13T21:39:21.659121153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:39:21.659845 containerd[1791]: time="2025-01-13T21:39:21.659802135Z" level=info msg="CreateContainer within sandbox \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:39:21.665584 containerd[1791]: time="2025-01-13T21:39:21.665543629Z" level=info msg="CreateContainer within sandbox \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e\"" Jan 13 21:39:21.665806 containerd[1791]: time="2025-01-13T21:39:21.665765471Z" level=info msg="StartContainer for \"f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e\"" Jan 13 21:39:21.691350 systemd[1]: Started cri-containerd-f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e.scope - libcontainer container f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e. 
Jan 13 21:39:21.706677 containerd[1791]: time="2025-01-13T21:39:21.706650414Z" level=info msg="StartContainer for \"f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e\" returns successfully" Jan 13 21:39:22.244215 containerd[1791]: time="2025-01-13T21:39:22.244190347Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:39:22.245033 systemd[1]: cri-containerd-f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e.scope: Deactivated successfully. Jan 13 21:39:22.254615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e-rootfs.mount: Deactivated successfully. Jan 13 21:39:22.274453 kubelet[3268]: I0113 21:39:22.274370 3268 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:39:22.317043 kubelet[3268]: I0113 21:39:22.316931 3268 topology_manager.go:215] "Topology Admit Handler" podUID="2fc98b87-986b-42c4-bb81-8550837ec10b" podNamespace="calico-system" podName="calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:22.318112 kubelet[3268]: I0113 21:39:22.318057 3268 topology_manager.go:215] "Topology Admit Handler" podUID="ec5b66ae-6390-4504-928f-0cd85519c73d" podNamespace="kube-system" podName="coredns-76f75df574-svkr6" Jan 13 21:39:22.318986 kubelet[3268]: I0113 21:39:22.318956 3268 topology_manager.go:215] "Topology Admit Handler" podUID="3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28" podNamespace="kube-system" podName="coredns-76f75df574-dqn4h" Jan 13 21:39:22.319886 kubelet[3268]: I0113 21:39:22.319809 3268 topology_manager.go:215] "Topology Admit Handler" podUID="547ff756-0fe2-4d76-8a18-b23758fbd4a2" podNamespace="calico-apiserver" podName="calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:22.320666 kubelet[3268]: I0113 
21:39:22.320618 3268 topology_manager.go:215] "Topology Admit Handler" podUID="dac5f61b-c5c7-4a9f-8108-2532f35a9873" podNamespace="calico-apiserver" podName="calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:22.332501 systemd[1]: Created slice kubepods-besteffort-pod2fc98b87_986b_42c4_bb81_8550837ec10b.slice - libcontainer container kubepods-besteffort-pod2fc98b87_986b_42c4_bb81_8550837ec10b.slice. Jan 13 21:39:22.342966 systemd[1]: Created slice kubepods-burstable-podec5b66ae_6390_4504_928f_0cd85519c73d.slice - libcontainer container kubepods-burstable-podec5b66ae_6390_4504_928f_0cd85519c73d.slice. Jan 13 21:39:22.349592 systemd[1]: Created slice kubepods-burstable-pod3bf9b435_fe1c_48eb_8f0f_0b2ec937fa28.slice - libcontainer container kubepods-burstable-pod3bf9b435_fe1c_48eb_8f0f_0b2ec937fa28.slice. Jan 13 21:39:22.355229 systemd[1]: Created slice kubepods-besteffort-pod547ff756_0fe2_4d76_8a18_b23758fbd4a2.slice - libcontainer container kubepods-besteffort-pod547ff756_0fe2_4d76_8a18_b23758fbd4a2.slice. Jan 13 21:39:22.359297 systemd[1]: Created slice kubepods-besteffort-poddac5f61b_c5c7_4a9f_8108_2532f35a9873.slice - libcontainer container kubepods-besteffort-poddac5f61b_c5c7_4a9f_8108_2532f35a9873.slice. 
Jan 13 21:39:22.366048 kubelet[3268]: I0113 21:39:22.366028 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnf8n\" (UniqueName: \"kubernetes.io/projected/dac5f61b-c5c7-4a9f-8108-2532f35a9873-kube-api-access-fnf8n\") pod \"calico-apiserver-f4cfd48dc-rmn7p\" (UID: \"dac5f61b-c5c7-4a9f-8108-2532f35a9873\") " pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:22.366162 kubelet[3268]: I0113 21:39:22.366061 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbm7q\" (UniqueName: \"kubernetes.io/projected/2fc98b87-986b-42c4-bb81-8550837ec10b-kube-api-access-jbm7q\") pod \"calico-kube-controllers-7b7795d5b4-82g4w\" (UID: \"2fc98b87-986b-42c4-bb81-8550837ec10b\") " pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:22.366162 kubelet[3268]: I0113 21:39:22.366084 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdzvj\" (UniqueName: \"kubernetes.io/projected/3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28-kube-api-access-kdzvj\") pod \"coredns-76f75df574-dqn4h\" (UID: \"3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28\") " pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:22.366162 kubelet[3268]: I0113 21:39:22.366136 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fc98b87-986b-42c4-bb81-8550837ec10b-tigera-ca-bundle\") pod \"calico-kube-controllers-7b7795d5b4-82g4w\" (UID: \"2fc98b87-986b-42c4-bb81-8550837ec10b\") " pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:22.366238 kubelet[3268]: I0113 21:39:22.366164 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/547ff756-0fe2-4d76-8a18-b23758fbd4a2-calico-apiserver-certs\") pod \"calico-apiserver-f4cfd48dc-t2q7n\" (UID: \"547ff756-0fe2-4d76-8a18-b23758fbd4a2\") " pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:22.366238 kubelet[3268]: I0113 21:39:22.366217 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28-config-volume\") pod \"coredns-76f75df574-dqn4h\" (UID: \"3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28\") " pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:22.366289 kubelet[3268]: I0113 21:39:22.366243 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dac5f61b-c5c7-4a9f-8108-2532f35a9873-calico-apiserver-certs\") pod \"calico-apiserver-f4cfd48dc-rmn7p\" (UID: \"dac5f61b-c5c7-4a9f-8108-2532f35a9873\") " pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:22.366289 kubelet[3268]: I0113 21:39:22.366260 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgs55\" (UniqueName: \"kubernetes.io/projected/ec5b66ae-6390-4504-928f-0cd85519c73d-kube-api-access-jgs55\") pod \"coredns-76f75df574-svkr6\" (UID: \"ec5b66ae-6390-4504-928f-0cd85519c73d\") " pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:22.366337 kubelet[3268]: I0113 21:39:22.366305 3268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec5b66ae-6390-4504-928f-0cd85519c73d-config-volume\") pod \"coredns-76f75df574-svkr6\" (UID: \"ec5b66ae-6390-4504-928f-0cd85519c73d\") " pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:22.366337 kubelet[3268]: I0113 21:39:22.366330 3268 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmk8g\" (UniqueName: \"kubernetes.io/projected/547ff756-0fe2-4d76-8a18-b23758fbd4a2-kube-api-access-nmk8g\") pod \"calico-apiserver-f4cfd48dc-t2q7n\" (UID: \"547ff756-0fe2-4d76-8a18-b23758fbd4a2\") " pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:22.640214 containerd[1791]: time="2025-01-13T21:39:22.640024192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:0,}" Jan 13 21:39:22.648211 containerd[1791]: time="2025-01-13T21:39:22.648104377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:0,}" Jan 13 21:39:22.653346 containerd[1791]: time="2025-01-13T21:39:22.653249720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:0,}" Jan 13 21:39:22.658628 containerd[1791]: time="2025-01-13T21:39:22.658518328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:39:22.663043 containerd[1791]: time="2025-01-13T21:39:22.662896840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:39:22.959698 containerd[1791]: time="2025-01-13T21:39:22.959664421Z" level=info msg="shim disconnected" id=f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e namespace=k8s.io Jan 13 21:39:22.959698 containerd[1791]: time="2025-01-13T21:39:22.959694521Z" level=warning msg="cleaning up after shim disconnected" id=f918d9123fe7f37e424f816cf7b58e845e1571b58f0b695143cdff66da44425e 
namespace=k8s.io Jan 13 21:39:22.959698 containerd[1791]: time="2025-01-13T21:39:22.959702224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:39:23.009116 containerd[1791]: time="2025-01-13T21:39:23.009078614Z" level=error msg="Failed to destroy network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.009306 containerd[1791]: time="2025-01-13T21:39:23.009293247Z" level=error msg="encountered an error cleaning up failed sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.009345 containerd[1791]: time="2025-01-13T21:39:23.009335130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.009533 kubelet[3268]: E0113 21:39:23.009515 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
21:39:23.009582 kubelet[3268]: E0113 21:39:23.009567 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:23.009605 kubelet[3268]: E0113 21:39:23.009583 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:23.009626 kubelet[3268]: E0113 21:39:23.009619 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-svkr6" podUID="ec5b66ae-6390-4504-928f-0cd85519c73d" Jan 13 21:39:23.017824 containerd[1791]: time="2025-01-13T21:39:23.017754649Z" level=error msg="Failed to destroy network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018001 containerd[1791]: time="2025-01-13T21:39:23.017981048Z" level=error msg="encountered an error cleaning up failed sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018070 containerd[1791]: time="2025-01-13T21:39:23.018036191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018214 kubelet[3268]: E0113 21:39:23.018199 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018255 kubelet[3268]: E0113 21:39:23.018245 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:23.018290 kubelet[3268]: E0113 21:39:23.018270 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:23.018316 containerd[1791]: time="2025-01-13T21:39:23.018303314Z" level=error msg="Failed to destroy network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018337 kubelet[3268]: E0113 21:39:23.018325 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" podUID="2fc98b87-986b-42c4-bb81-8550837ec10b" Jan 13 21:39:23.018453 containerd[1791]: time="2025-01-13T21:39:23.018442087Z" level=error 
msg="encountered an error cleaning up failed sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018478 containerd[1791]: time="2025-01-13T21:39:23.018462307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018551 kubelet[3268]: E0113 21:39:23.018541 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.018578 kubelet[3268]: E0113 21:39:23.018565 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:23.018578 kubelet[3268]: E0113 21:39:23.018577 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:23.018621 kubelet[3268]: E0113 21:39:23.018603 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqn4h" podUID="3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28" Jan 13 21:39:23.019115 containerd[1791]: time="2025-01-13T21:39:23.019072441Z" level=error msg="Failed to destroy network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.019249 containerd[1791]: time="2025-01-13T21:39:23.019212837Z" level=error msg="encountered an error cleaning up failed sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:23.019249 
containerd[1791]: time="2025-01-13T21:39:23.019237705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.019348 kubelet[3268]: E0113 21:39:23.019308 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.019348 kubelet[3268]: E0113 21:39:23.019326 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n"
Jan 13 21:39:23.019348 kubelet[3268]: E0113 21:39:23.019337 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n"
Jan 13 21:39:23.019413 kubelet[3268]: E0113 21:39:23.019360 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" podUID="547ff756-0fe2-4d76-8a18-b23758fbd4a2"
Jan 13 21:39:23.021568 containerd[1791]: time="2025-01-13T21:39:23.021548748Z" level=error msg="Failed to destroy network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.021701 containerd[1791]: time="2025-01-13T21:39:23.021690238Z" level=error msg="encountered an error cleaning up failed sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.021728 containerd[1791]: time="2025-01-13T21:39:23.021715490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.021830 kubelet[3268]: E0113 21:39:23.021823 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.021856 kubelet[3268]: E0113 21:39:23.021844 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p"
Jan 13 21:39:23.021877 kubelet[3268]: E0113 21:39:23.021858 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p"
Jan 13 21:39:23.021897 kubelet[3268]: E0113 21:39:23.021888 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" podUID="dac5f61b-c5c7-4a9f-8108-2532f35a9873"
Jan 13 21:39:23.167019 systemd[1]: Created slice kubepods-besteffort-podd53850a4_8d71_4e78_b476_3e30d51488fd.slice - libcontainer container kubepods-besteffort-podd53850a4_8d71_4e78_b476_3e30d51488fd.slice.
Jan 13 21:39:23.172622 containerd[1791]: time="2025-01-13T21:39:23.172541582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:0,}"
Jan 13 21:39:23.201776 containerd[1791]: time="2025-01-13T21:39:23.201722831Z" level=error msg="Failed to destroy network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.201916 containerd[1791]: time="2025-01-13T21:39:23.201903325Z" level=error msg="encountered an error cleaning up failed sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.201947 containerd[1791]: time="2025-01-13T21:39:23.201936257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.202085 kubelet[3268]: E0113 21:39:23.202072 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.202116 kubelet[3268]: E0113 21:39:23.202105 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz"
Jan 13 21:39:23.202139 kubelet[3268]: E0113 21:39:23.202119 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz"
Jan 13 21:39:23.202164 kubelet[3268]: E0113 21:39:23.202153 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd"
Jan 13 21:39:23.239490 containerd[1791]: time="2025-01-13T21:39:23.239295740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 21:39:23.239729 kubelet[3268]: I0113 21:39:23.239461 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808"
Jan 13 21:39:23.240435 containerd[1791]: time="2025-01-13T21:39:23.240403018Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\""
Jan 13 21:39:23.240571 containerd[1791]: time="2025-01-13T21:39:23.240557292Z" level=info msg="Ensure that sandbox 13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808 in task-service has been cleanup successfully"
Jan 13 21:39:23.240631 kubelet[3268]: I0113 21:39:23.240599 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d"
Jan 13 21:39:23.240674 containerd[1791]: time="2025-01-13T21:39:23.240653349Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully"
Jan 13 21:39:23.240674 containerd[1791]: time="2025-01-13T21:39:23.240663216Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully"
Jan 13 21:39:23.240826 containerd[1791]: time="2025-01-13T21:39:23.240812951Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\""
Jan 13 21:39:23.240872 containerd[1791]: time="2025-01-13T21:39:23.240862355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:1,}"
Jan 13 21:39:23.240953 containerd[1791]: time="2025-01-13T21:39:23.240939731Z" level=info msg="Ensure that sandbox bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d in task-service has been cleanup successfully"
Jan 13 21:39:23.241047 containerd[1791]: time="2025-01-13T21:39:23.241035794Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully"
Jan 13 21:39:23.241111 containerd[1791]: time="2025-01-13T21:39:23.241048863Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully"
Jan 13 21:39:23.241138 kubelet[3268]: I0113 21:39:23.241060 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62"
Jan 13 21:39:23.241226 containerd[1791]: time="2025-01-13T21:39:23.241213263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 21:39:23.241278 containerd[1791]: time="2025-01-13T21:39:23.241269678Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\""
Jan 13 21:39:23.241411 containerd[1791]: time="2025-01-13T21:39:23.241392445Z" level=info msg="Ensure that sandbox 2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62 in task-service has been cleanup successfully"
Jan 13 21:39:23.241507 containerd[1791]: time="2025-01-13T21:39:23.241495416Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully"
Jan 13 21:39:23.241543 containerd[1791]: time="2025-01-13T21:39:23.241507012Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully"
Jan 13 21:39:23.241571 kubelet[3268]: I0113 21:39:23.241563 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555"
Jan 13 21:39:23.241749 containerd[1791]: time="2025-01-13T21:39:23.241738450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 21:39:23.241795 containerd[1791]: time="2025-01-13T21:39:23.241785502Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\""
Jan 13 21:39:23.241881 containerd[1791]: time="2025-01-13T21:39:23.241873106Z" level=info msg="Ensure that sandbox 009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555 in task-service has been cleanup successfully"
Jan 13 21:39:23.241953 containerd[1791]: time="2025-01-13T21:39:23.241945433Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully"
Jan 13 21:39:23.241975 containerd[1791]: time="2025-01-13T21:39:23.241953435Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully"
Jan 13 21:39:23.242060 kubelet[3268]: I0113 21:39:23.242052 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f"
Jan 13 21:39:23.242154 containerd[1791]: time="2025-01-13T21:39:23.242143695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:1,}"
Jan 13 21:39:23.242278 containerd[1791]: time="2025-01-13T21:39:23.242266545Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\""
Jan 13 21:39:23.242410 containerd[1791]: time="2025-01-13T21:39:23.242397604Z" level=info msg="Ensure that sandbox 41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f in task-service has been cleanup successfully"
Jan 13 21:39:23.242465 kubelet[3268]: I0113 21:39:23.242458 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946"
Jan 13 21:39:23.242499 containerd[1791]: time="2025-01-13T21:39:23.242488931Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully"
Jan 13 21:39:23.242521 containerd[1791]: time="2025-01-13T21:39:23.242500460Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully"
Jan 13 21:39:23.242683 containerd[1791]: time="2025-01-13T21:39:23.242669079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:1,}"
Jan 13 21:39:23.242718 containerd[1791]: time="2025-01-13T21:39:23.242693900Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\""
Jan 13 21:39:23.242813 containerd[1791]: time="2025-01-13T21:39:23.242801628Z" level=info msg="Ensure that sandbox ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946 in task-service has been cleanup successfully"
Jan 13 21:39:23.242896 containerd[1791]: time="2025-01-13T21:39:23.242886245Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" successfully"
Jan 13 21:39:23.242935 containerd[1791]: time="2025-01-13T21:39:23.242895664Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully"
Jan 13 21:39:23.243072 containerd[1791]: time="2025-01-13T21:39:23.243060665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:1,}"
Jan 13 21:39:23.283914 containerd[1791]: time="2025-01-13T21:39:23.283881634Z" level=error msg="Failed to destroy network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.284131 containerd[1791]: time="2025-01-13T21:39:23.284112331Z" level=error msg="encountered an error cleaning up failed sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.284185 containerd[1791]: time="2025-01-13T21:39:23.284169830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.284407 kubelet[3268]: E0113 21:39:23.284377 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.284613 kubelet[3268]: E0113 21:39:23.284425 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w"
Jan 13 21:39:23.284613 kubelet[3268]: E0113 21:39:23.284447 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w"
Jan 13 21:39:23.284613 kubelet[3268]: E0113 21:39:23.284499 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" podUID="2fc98b87-986b-42c4-bb81-8550837ec10b"
Jan 13 21:39:23.284807 containerd[1791]: time="2025-01-13T21:39:23.284789760Z" level=error msg="Failed to destroy network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.284877 containerd[1791]: time="2025-01-13T21:39:23.284862328Z" level=error msg="Failed to destroy network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.284980 containerd[1791]: time="2025-01-13T21:39:23.284960076Z" level=error msg="Failed to destroy network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285035 containerd[1791]: time="2025-01-13T21:39:23.284965511Z" level=error msg="Failed to destroy network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285035 containerd[1791]: time="2025-01-13T21:39:23.285020973Z" level=error msg="encountered an error cleaning up failed sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285077 containerd[1791]: time="2025-01-13T21:39:23.285050822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285099 containerd[1791]: time="2025-01-13T21:39:23.284962796Z" level=error msg="encountered an error cleaning up failed sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285118 containerd[1791]: time="2025-01-13T21:39:23.285107952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285194 containerd[1791]: time="2025-01-13T21:39:23.285151393Z" level=error msg="encountered an error cleaning up failed sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285194 containerd[1791]: time="2025-01-13T21:39:23.285165303Z" level=error msg="encountered an error cleaning up failed sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285194 containerd[1791]: time="2025-01-13T21:39:23.285177021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285267 kubelet[3268]: E0113 21:39:23.285185 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285267 kubelet[3268]: E0113 21:39:23.285186 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285267 kubelet[3268]: E0113 21:39:23.285213 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p"
Jan 13 21:39:23.285267 kubelet[3268]: E0113 21:39:23.285215 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6"
Jan 13 21:39:23.285365 containerd[1791]: time="2025-01-13T21:39:23.285196443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285385 kubelet[3268]: E0113 21:39:23.285233 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p"
Jan 13 21:39:23.285385 kubelet[3268]: E0113 21:39:23.285233 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6"
Jan 13 21:39:23.285385 kubelet[3268]: E0113 21:39:23.285255 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285438 kubelet[3268]: E0113 21:39:23.285270 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" podUID="dac5f61b-c5c7-4a9f-8108-2532f35a9873"
Jan 13 21:39:23.285438 kubelet[3268]: E0113 21:39:23.285270 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-svkr6" podUID="ec5b66ae-6390-4504-928f-0cd85519c73d"
Jan 13 21:39:23.285438 kubelet[3268]: E0113 21:39:23.285275 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n"
Jan 13 21:39:23.285513 kubelet[3268]: E0113 21:39:23.285272 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.285513 kubelet[3268]: E0113 21:39:23.285288 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n"
Jan 13 21:39:23.285513 kubelet[3268]: E0113 21:39:23.285294 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz"
Jan 13 21:39:23.285513 kubelet[3268]: E0113 21:39:23.285305 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz"
Jan 13 21:39:23.285580 kubelet[3268]: E0113 21:39:23.285315 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" podUID="547ff756-0fe2-4d76-8a18-b23758fbd4a2"
Jan 13 21:39:23.285580 kubelet[3268]: E0113 21:39:23.285323 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd"
Jan 13 21:39:23.287609 containerd[1791]: time="2025-01-13T21:39:23.287593423Z" level=error msg="Failed to destroy network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.287753 containerd[1791]: time="2025-01-13T21:39:23.287741259Z" level=error msg="encountered an error cleaning up failed sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.287777 containerd[1791]: time="2025-01-13T21:39:23.287767232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.287883 kubelet[3268]: E0113 21:39:23.287875 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:39:23.287911 kubelet[3268]: E0113 21:39:23.287896 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h"
Jan 13 21:39:23.287911 kubelet[3268]: E0113 21:39:23.287909 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h"
Jan 13 21:39:23.287949 kubelet[3268]: E0113 21:39:23.287933 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqn4h" podUID="3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28" Jan 13 21:39:23.667608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62-shm.mount: Deactivated successfully. Jan 13 21:39:23.667680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d-shm.mount: Deactivated successfully. Jan 13 21:39:23.667713 systemd[1]: run-netns-cni\x2d8d19310c\x2d5d2a\x2d18c9\x2d5af4\x2d1469d08de708.mount: Deactivated successfully. Jan 13 21:39:23.667742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946-shm.mount: Deactivated successfully. Jan 13 21:39:23.667772 systemd[1]: run-netns-cni\x2dbcb6c529\x2db178\x2d2711\x2d9085\x2dfbfb3c032d72.mount: Deactivated successfully. Jan 13 21:39:23.667801 systemd[1]: run-netns-cni\x2da1066df5\x2deded\x2d263c\x2d331d\x2df56d0c00542c.mount: Deactivated successfully. Jan 13 21:39:23.667830 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555-shm.mount: Deactivated successfully. 
Jan 13 21:39:23.667862 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808-shm.mount: Deactivated successfully. Jan 13 21:39:24.245693 kubelet[3268]: I0113 21:39:24.245675 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4" Jan 13 21:39:24.245931 containerd[1791]: time="2025-01-13T21:39:24.245912878Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\"" Jan 13 21:39:24.246203 containerd[1791]: time="2025-01-13T21:39:24.246062406Z" level=info msg="Ensure that sandbox ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4 in task-service has been cleanup successfully" Jan 13 21:39:24.246203 containerd[1791]: time="2025-01-13T21:39:24.246178335Z" level=info msg="TearDown network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" successfully" Jan 13 21:39:24.246268 containerd[1791]: time="2025-01-13T21:39:24.246206082Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" returns successfully" Jan 13 21:39:24.246302 kubelet[3268]: I0113 21:39:24.246219 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a" Jan 13 21:39:24.246345 containerd[1791]: time="2025-01-13T21:39:24.246326200Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\"" Jan 13 21:39:24.246411 containerd[1791]: time="2025-01-13T21:39:24.246381257Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully" Jan 13 21:39:24.246450 containerd[1791]: time="2025-01-13T21:39:24.246410228Z" level=info msg="StopPodSandbox for 
\"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully" Jan 13 21:39:24.246450 containerd[1791]: time="2025-01-13T21:39:24.246413113Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:24.246538 containerd[1791]: time="2025-01-13T21:39:24.246524947Z" level=info msg="Ensure that sandbox 1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a in task-service has been cleanup successfully" Jan 13 21:39:24.246601 containerd[1791]: time="2025-01-13T21:39:24.246590775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:2,}" Jan 13 21:39:24.246638 containerd[1791]: time="2025-01-13T21:39:24.246624924Z" level=info msg="TearDown network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" successfully" Jan 13 21:39:24.246674 containerd[1791]: time="2025-01-13T21:39:24.246638839Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" returns successfully" Jan 13 21:39:24.246741 kubelet[3268]: I0113 21:39:24.246731 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59" Jan 13 21:39:24.246806 containerd[1791]: time="2025-01-13T21:39:24.246792822Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:24.246840 containerd[1791]: time="2025-01-13T21:39:24.246834436Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" successfully" Jan 13 21:39:24.246865 containerd[1791]: time="2025-01-13T21:39:24.246840659Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully" Jan 13 21:39:24.247022 
containerd[1791]: time="2025-01-13T21:39:24.247012184Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:24.247052 containerd[1791]: time="2025-01-13T21:39:24.247028903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:2,}" Jan 13 21:39:24.247125 containerd[1791]: time="2025-01-13T21:39:24.247114348Z" level=info msg="Ensure that sandbox 51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59 in task-service has been cleanup successfully" Jan 13 21:39:24.247200 containerd[1791]: time="2025-01-13T21:39:24.247189843Z" level=info msg="TearDown network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" successfully" Jan 13 21:39:24.247231 containerd[1791]: time="2025-01-13T21:39:24.247199539Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" returns successfully" Jan 13 21:39:24.247270 kubelet[3268]: I0113 21:39:24.247263 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce" Jan 13 21:39:24.247326 containerd[1791]: time="2025-01-13T21:39:24.247314920Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:24.247379 containerd[1791]: time="2025-01-13T21:39:24.247354223Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully" Jan 13 21:39:24.247407 containerd[1791]: time="2025-01-13T21:39:24.247380552Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully" Jan 13 21:39:24.247504 containerd[1791]: time="2025-01-13T21:39:24.247494986Z" level=info msg="StopPodSandbox for 
\"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:24.247609 containerd[1791]: time="2025-01-13T21:39:24.247595727Z" level=info msg="Ensure that sandbox 31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce in task-service has been cleanup successfully" Jan 13 21:39:24.247651 containerd[1791]: time="2025-01-13T21:39:24.247617051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:2,}" Jan 13 21:39:24.247687 containerd[1791]: time="2025-01-13T21:39:24.247678209Z" level=info msg="TearDown network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" successfully" Jan 13 21:39:24.247722 containerd[1791]: time="2025-01-13T21:39:24.247688323Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" returns successfully" Jan 13 21:39:24.247801 containerd[1791]: time="2025-01-13T21:39:24.247791644Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:24.247829 kubelet[3268]: I0113 21:39:24.247796 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a" Jan 13 21:39:24.247816 systemd[1]: run-netns-cni\x2dc2ba271d\x2d78c3\x2da3b8\x2d2b37\x2d1e741f60853b.mount: Deactivated successfully. 
Jan 13 21:39:24.248001 containerd[1791]: time="2025-01-13T21:39:24.247838355Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully" Jan 13 21:39:24.248001 containerd[1791]: time="2025-01-13T21:39:24.247845313Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully" Jan 13 21:39:24.248001 containerd[1791]: time="2025-01-13T21:39:24.247971301Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\"" Jan 13 21:39:24.248091 containerd[1791]: time="2025-01-13T21:39:24.248041185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:2,}" Jan 13 21:39:24.248091 containerd[1791]: time="2025-01-13T21:39:24.248081623Z" level=info msg="Ensure that sandbox 44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a in task-service has been cleanup successfully" Jan 13 21:39:24.248182 containerd[1791]: time="2025-01-13T21:39:24.248173155Z" level=info msg="TearDown network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" successfully" Jan 13 21:39:24.248209 containerd[1791]: time="2025-01-13T21:39:24.248182291Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" returns successfully" Jan 13 21:39:24.248302 containerd[1791]: time="2025-01-13T21:39:24.248290173Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\"" Jan 13 21:39:24.248353 containerd[1791]: time="2025-01-13T21:39:24.248330570Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully" Jan 13 21:39:24.248387 containerd[1791]: time="2025-01-13T21:39:24.248352618Z" level=info msg="StopPodSandbox for 
\"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully" Jan 13 21:39:24.248421 kubelet[3268]: I0113 21:39:24.248376 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77" Jan 13 21:39:24.248541 containerd[1791]: time="2025-01-13T21:39:24.248532209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:2,}" Jan 13 21:39:24.248591 containerd[1791]: time="2025-01-13T21:39:24.248583785Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\"" Jan 13 21:39:24.248673 containerd[1791]: time="2025-01-13T21:39:24.248664419Z" level=info msg="Ensure that sandbox cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77 in task-service has been cleanup successfully" Jan 13 21:39:24.248740 containerd[1791]: time="2025-01-13T21:39:24.248732576Z" level=info msg="TearDown network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" successfully" Jan 13 21:39:24.248759 containerd[1791]: time="2025-01-13T21:39:24.248740424Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" returns successfully" Jan 13 21:39:24.248862 containerd[1791]: time="2025-01-13T21:39:24.248851694Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\"" Jan 13 21:39:24.248911 containerd[1791]: time="2025-01-13T21:39:24.248902100Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully" Jan 13 21:39:24.248946 containerd[1791]: time="2025-01-13T21:39:24.248910537Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully" Jan 13 21:39:24.249101 
containerd[1791]: time="2025-01-13T21:39:24.249089097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:2,}" Jan 13 21:39:24.249520 systemd[1]: run-netns-cni\x2d509c7efd\x2dfac9\x2db5d1\x2d11f8\x2d87571d71b436.mount: Deactivated successfully. Jan 13 21:39:24.249567 systemd[1]: run-netns-cni\x2d175c4ca2\x2da395\x2d64f3\x2d4b8c\x2dc5e33843b9e6.mount: Deactivated successfully. Jan 13 21:39:24.249600 systemd[1]: run-netns-cni\x2de767d0bc\x2d5de6\x2d11c3\x2deef3\x2de7874917bc8e.mount: Deactivated successfully. Jan 13 21:39:24.249641 systemd[1]: run-netns-cni\x2d0543af5d\x2d62f1\x2d327c\x2d792b\x2dd26b850c5fa5.mount: Deactivated successfully. Jan 13 21:39:24.251895 systemd[1]: run-netns-cni\x2d7b242dcc\x2ddaab\x2d232a\x2ddc23\x2dcc51eee19dd1.mount: Deactivated successfully. Jan 13 21:39:24.288665 containerd[1791]: time="2025-01-13T21:39:24.288630262Z" level=error msg="Failed to destroy network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.288850 containerd[1791]: time="2025-01-13T21:39:24.288832097Z" level=error msg="Failed to destroy network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.288909 containerd[1791]: time="2025-01-13T21:39:24.288887598Z" level=error msg="encountered an error cleaning up failed sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.288951 containerd[1791]: time="2025-01-13T21:39:24.288933860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289006 containerd[1791]: time="2025-01-13T21:39:24.288989834Z" level=error msg="encountered an error cleaning up failed sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289044 containerd[1791]: time="2025-01-13T21:39:24.289027594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289109 containerd[1791]: time="2025-01-13T21:39:24.288897064Z" level=error msg="Failed to destroy network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289145 kubelet[3268]: E0113 21:39:24.289108 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289393 kubelet[3268]: E0113 21:39:24.289151 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:24.289393 kubelet[3268]: E0113 21:39:24.289165 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:24.289393 kubelet[3268]: E0113 21:39:24.289108 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289501 
containerd[1791]: time="2025-01-13T21:39:24.289169472Z" level=error msg="Failed to destroy network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289501 containerd[1791]: time="2025-01-13T21:39:24.289241201Z" level=error msg="encountered an error cleaning up failed sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289501 containerd[1791]: time="2025-01-13T21:39:24.289270190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289501 containerd[1791]: time="2025-01-13T21:39:24.289312907Z" level=error msg="encountered an error cleaning up failed sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289501 containerd[1791]: time="2025-01-13T21:39:24.289336613Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289680 kubelet[3268]: E0113 21:39:24.289198 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqn4h" podUID="3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28" Jan 13 21:39:24.289680 kubelet[3268]: E0113 21:39:24.289208 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:24.289680 kubelet[3268]: E0113 21:39:24.289224 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:24.289799 kubelet[3268]: E0113 21:39:24.289250 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" podUID="2fc98b87-986b-42c4-bb81-8550837ec10b" Jan 13 21:39:24.289799 kubelet[3268]: E0113 21:39:24.289350 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289799 kubelet[3268]: E0113 21:39:24.289371 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:24.289911 kubelet[3268]: E0113 21:39:24.289384 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:24.289911 kubelet[3268]: E0113 21:39:24.289399 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.289911 kubelet[3268]: E0113 21:39:24.289413 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:24.290039 kubelet[3268]: E0113 21:39:24.289416 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:24.290039 kubelet[3268]: E0113 21:39:24.289427 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:24.290039 kubelet[3268]: E0113 21:39:24.289447 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" podUID="dac5f61b-c5c7-4a9f-8108-2532f35a9873" Jan 13 21:39:24.290523 containerd[1791]: time="2025-01-13T21:39:24.290504343Z" level=error msg="Failed to destroy network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.290724 containerd[1791]: time="2025-01-13T21:39:24.290702635Z" level=error msg="encountered an error cleaning up failed sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.290764 containerd[1791]: time="2025-01-13T21:39:24.290734896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.290815 kubelet[3268]: E0113 21:39:24.290808 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.290838 kubelet[3268]: E0113 21:39:24.290824 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:24.290838 kubelet[3268]: E0113 21:39:24.290835 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:24.290879 kubelet[3268]: E0113 21:39:24.290856 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-svkr6" podUID="ec5b66ae-6390-4504-928f-0cd85519c73d" Jan 13 21:39:24.291295 containerd[1791]: time="2025-01-13T21:39:24.291279139Z" level=error msg="Failed to destroy network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.291510 containerd[1791]: time="2025-01-13T21:39:24.291464454Z" level=error msg="encountered an error cleaning up failed sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.291510 containerd[1791]: time="2025-01-13T21:39:24.291488312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.291627 kubelet[3268]: E0113 21:39:24.291606 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:24.291652 kubelet[3268]: E0113 21:39:24.291637 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:24.291652 kubelet[3268]: E0113 21:39:24.291648 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:24.291689 kubelet[3268]: E0113 21:39:24.291669 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" podUID="547ff756-0fe2-4d76-8a18-b23758fbd4a2" Jan 13 21:39:24.665538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510-shm.mount: Deactivated successfully. 
Jan 13 21:39:25.250125 kubelet[3268]: I0113 21:39:25.250109 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1" Jan 13 21:39:25.250423 containerd[1791]: time="2025-01-13T21:39:25.250406912Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" Jan 13 21:39:25.250608 containerd[1791]: time="2025-01-13T21:39:25.250542012Z" level=info msg="Ensure that sandbox 741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1 in task-service has been cleanup successfully" Jan 13 21:39:25.250654 containerd[1791]: time="2025-01-13T21:39:25.250641768Z" level=info msg="TearDown network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" successfully" Jan 13 21:39:25.250678 containerd[1791]: time="2025-01-13T21:39:25.250653685Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" returns successfully" Jan 13 21:39:25.250771 containerd[1791]: time="2025-01-13T21:39:25.250760268Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:25.250797 kubelet[3268]: I0113 21:39:25.250774 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c" Jan 13 21:39:25.250837 containerd[1791]: time="2025-01-13T21:39:25.250804132Z" level=info msg="TearDown network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" successfully" Jan 13 21:39:25.250859 containerd[1791]: time="2025-01-13T21:39:25.250836770Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" returns successfully" Jan 13 21:39:25.250965 containerd[1791]: time="2025-01-13T21:39:25.250953781Z" level=info msg="StopPodSandbox for 
\"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:25.251031 containerd[1791]: time="2025-01-13T21:39:25.251005744Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully" Jan 13 21:39:25.251051 containerd[1791]: time="2025-01-13T21:39:25.251031091Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully" Jan 13 21:39:25.251051 containerd[1791]: time="2025-01-13T21:39:25.251027469Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\"" Jan 13 21:39:25.251167 containerd[1791]: time="2025-01-13T21:39:25.251156013Z" level=info msg="Ensure that sandbox 911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c in task-service has been cleanup successfully" Jan 13 21:39:25.251264 containerd[1791]: time="2025-01-13T21:39:25.251253177Z" level=info msg="TearDown network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" successfully" Jan 13 21:39:25.251288 containerd[1791]: time="2025-01-13T21:39:25.251264945Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" returns successfully" Jan 13 21:39:25.251288 containerd[1791]: time="2025-01-13T21:39:25.251266730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:3,}" Jan 13 21:39:25.251387 containerd[1791]: time="2025-01-13T21:39:25.251376910Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\"" Jan 13 21:39:25.251419 kubelet[3268]: I0113 21:39:25.251412 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9" Jan 13 21:39:25.251443 containerd[1791]: 
time="2025-01-13T21:39:25.251426255Z" level=info msg="TearDown network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" successfully" Jan 13 21:39:25.251443 containerd[1791]: time="2025-01-13T21:39:25.251435707Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" returns successfully" Jan 13 21:39:25.251558 containerd[1791]: time="2025-01-13T21:39:25.251548546Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\"" Jan 13 21:39:25.251609 containerd[1791]: time="2025-01-13T21:39:25.251596905Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully" Jan 13 21:39:25.251641 containerd[1791]: time="2025-01-13T21:39:25.251608957Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully" Jan 13 21:39:25.251641 containerd[1791]: time="2025-01-13T21:39:25.251600142Z" level=info msg="StopPodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" Jan 13 21:39:25.251774 containerd[1791]: time="2025-01-13T21:39:25.251762083Z" level=info msg="Ensure that sandbox f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9 in task-service has been cleanup successfully" Jan 13 21:39:25.251802 containerd[1791]: time="2025-01-13T21:39:25.251791347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:3,}" Jan 13 21:39:25.251889 containerd[1791]: time="2025-01-13T21:39:25.251876528Z" level=info msg="TearDown network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" successfully" Jan 13 21:39:25.251922 containerd[1791]: time="2025-01-13T21:39:25.251888274Z" level=info msg="StopPodSandbox for 
\"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" returns successfully" Jan 13 21:39:25.252032 containerd[1791]: time="2025-01-13T21:39:25.252021044Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\"" Jan 13 21:39:25.252078 containerd[1791]: time="2025-01-13T21:39:25.252068807Z" level=info msg="TearDown network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" successfully" Jan 13 21:39:25.252078 containerd[1791]: time="2025-01-13T21:39:25.252077774Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" returns successfully" Jan 13 21:39:25.252151 kubelet[3268]: I0113 21:39:25.252139 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c" Jan 13 21:39:25.252201 containerd[1791]: time="2025-01-13T21:39:25.252186313Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\"" Jan 13 21:39:25.252251 containerd[1791]: time="2025-01-13T21:39:25.252240697Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully" Jan 13 21:39:25.252282 containerd[1791]: time="2025-01-13T21:39:25.252251743Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully" Jan 13 21:39:25.252386 systemd[1]: run-netns-cni\x2de5b28692\x2d45e7\x2d454e\x2d95f4\x2d63d59a9d3df1.mount: Deactivated successfully. 
Jan 13 21:39:25.252520 containerd[1791]: time="2025-01-13T21:39:25.252386052Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" Jan 13 21:39:25.252520 containerd[1791]: time="2025-01-13T21:39:25.252407257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:3,}" Jan 13 21:39:25.252520 containerd[1791]: time="2025-01-13T21:39:25.252502581Z" level=info msg="Ensure that sandbox ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c in task-service has been cleanup successfully" Jan 13 21:39:25.252624 containerd[1791]: time="2025-01-13T21:39:25.252612923Z" level=info msg="TearDown network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" successfully" Jan 13 21:39:25.252649 containerd[1791]: time="2025-01-13T21:39:25.252624555Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" returns successfully" Jan 13 21:39:25.252763 containerd[1791]: time="2025-01-13T21:39:25.252748127Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:25.252814 containerd[1791]: time="2025-01-13T21:39:25.252804522Z" level=info msg="TearDown network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" successfully" Jan 13 21:39:25.252841 containerd[1791]: time="2025-01-13T21:39:25.252815072Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" returns successfully" Jan 13 21:39:25.252947 kubelet[3268]: I0113 21:39:25.252936 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510" Jan 13 21:39:25.252989 containerd[1791]: time="2025-01-13T21:39:25.252946875Z" level=info msg="StopPodSandbox for 
\"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:25.253041 containerd[1791]: time="2025-01-13T21:39:25.253020524Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully" Jan 13 21:39:25.253041 containerd[1791]: time="2025-01-13T21:39:25.253031246Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully" Jan 13 21:39:25.253205 containerd[1791]: time="2025-01-13T21:39:25.253194461Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\"" Jan 13 21:39:25.253233 containerd[1791]: time="2025-01-13T21:39:25.253221708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:3,}" Jan 13 21:39:25.253325 containerd[1791]: time="2025-01-13T21:39:25.253314766Z" level=info msg="Ensure that sandbox db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510 in task-service has been cleanup successfully" Jan 13 21:39:25.253419 containerd[1791]: time="2025-01-13T21:39:25.253409424Z" level=info msg="TearDown network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" successfully" Jan 13 21:39:25.253436 containerd[1791]: time="2025-01-13T21:39:25.253420350Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" returns successfully" Jan 13 21:39:25.253513 containerd[1791]: time="2025-01-13T21:39:25.253504563Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\"" Jan 13 21:39:25.253551 containerd[1791]: time="2025-01-13T21:39:25.253544944Z" level=info msg="TearDown network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" successfully" Jan 13 21:39:25.253570 containerd[1791]: 
time="2025-01-13T21:39:25.253551659Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" returns successfully" Jan 13 21:39:25.253695 containerd[1791]: time="2025-01-13T21:39:25.253684199Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\"" Jan 13 21:39:25.253742 containerd[1791]: time="2025-01-13T21:39:25.253734118Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully" Jan 13 21:39:25.253762 containerd[1791]: time="2025-01-13T21:39:25.253743710Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully" Jan 13 21:39:25.253792 kubelet[3268]: I0113 21:39:25.253784 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be" Jan 13 21:39:25.253887 containerd[1791]: time="2025-01-13T21:39:25.253877879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:3,}" Jan 13 21:39:25.254014 containerd[1791]: time="2025-01-13T21:39:25.253997886Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" Jan 13 21:39:25.254094 containerd[1791]: time="2025-01-13T21:39:25.254085069Z" level=info msg="Ensure that sandbox 31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be in task-service has been cleanup successfully" Jan 13 21:39:25.254161 containerd[1791]: time="2025-01-13T21:39:25.254153780Z" level=info msg="TearDown network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" successfully" Jan 13 21:39:25.254179 containerd[1791]: time="2025-01-13T21:39:25.254161858Z" level=info msg="StopPodSandbox for 
\"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" returns successfully" Jan 13 21:39:25.254276 containerd[1791]: time="2025-01-13T21:39:25.254267856Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:25.254316 containerd[1791]: time="2025-01-13T21:39:25.254307352Z" level=info msg="TearDown network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" successfully" Jan 13 21:39:25.254337 containerd[1791]: time="2025-01-13T21:39:25.254316993Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" returns successfully" Jan 13 21:39:25.254454 containerd[1791]: time="2025-01-13T21:39:25.254442918Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:25.254505 containerd[1791]: time="2025-01-13T21:39:25.254493715Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" successfully" Jan 13 21:39:25.254543 containerd[1791]: time="2025-01-13T21:39:25.254504755Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully" Jan 13 21:39:25.254586 systemd[1]: run-netns-cni\x2d3c2647cb\x2d3613\x2d4c58\x2df06a\x2d2fabddaf23c0.mount: Deactivated successfully. Jan 13 21:39:25.254653 systemd[1]: run-netns-cni\x2d064a520f\x2de514\x2de6ad\x2d336d\x2d6319b5ff1993.mount: Deactivated successfully. Jan 13 21:39:25.254694 containerd[1791]: time="2025-01-13T21:39:25.254675023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:3,}" Jan 13 21:39:25.254706 systemd[1]: run-netns-cni\x2df9d92e94\x2d2c32\x2dafc3\x2d51d3\x2d6bc22e2b359e.mount: Deactivated successfully. 
Jan 13 21:39:25.257306 systemd[1]: run-netns-cni\x2d2f119e18\x2d2623\x2dd998\x2dd2be\x2dbe84dd9a5215.mount: Deactivated successfully. Jan 13 21:39:25.257383 systemd[1]: run-netns-cni\x2d6a60a113\x2db928\x2dd682\x2db20e\x2d1ca8c70ba932.mount: Deactivated successfully. Jan 13 21:39:25.296083 containerd[1791]: time="2025-01-13T21:39:25.296052921Z" level=error msg="Failed to destroy network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296199 containerd[1791]: time="2025-01-13T21:39:25.296067941Z" level=error msg="Failed to destroy network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296302 containerd[1791]: time="2025-01-13T21:39:25.296284197Z" level=error msg="encountered an error cleaning up failed sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296354 containerd[1791]: time="2025-01-13T21:39:25.296292463Z" level=error msg="encountered an error cleaning up failed sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296354 containerd[1791]: 
time="2025-01-13T21:39:25.296329031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296354 containerd[1791]: time="2025-01-13T21:39:25.296340400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296505 containerd[1791]: time="2025-01-13T21:39:25.296487052Z" level=error msg="Failed to destroy network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296560 kubelet[3268]: E0113 21:39:25.296527 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296560 kubelet[3268]: E0113 21:39:25.296530 3268 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.296560 kubelet[3268]: E0113 21:39:25.296560 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:25.296866 kubelet[3268]: E0113 21:39:25.296560 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:25.296866 kubelet[3268]: E0113 21:39:25.296576 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:25.296866 kubelet[3268]: E0113 21:39:25.296576 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:25.296960 kubelet[3268]: E0113 21:39:25.296607 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-svkr6" podUID="ec5b66ae-6390-4504-928f-0cd85519c73d" Jan 13 21:39:25.296960 kubelet[3268]: E0113 21:39:25.296614 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" podUID="dac5f61b-c5c7-4a9f-8108-2532f35a9873" Jan 
13 21:39:25.297074 containerd[1791]: time="2025-01-13T21:39:25.296541341Z" level=error msg="Failed to destroy network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297113 containerd[1791]: time="2025-01-13T21:39:25.296980538Z" level=error msg="Failed to destroy network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297187 containerd[1791]: time="2025-01-13T21:39:25.297172020Z" level=error msg="encountered an error cleaning up failed sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297224 containerd[1791]: time="2025-01-13T21:39:25.297196389Z" level=error msg="encountered an error cleaning up failed sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297245 containerd[1791]: time="2025-01-13T21:39:25.297231974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297283 containerd[1791]: time="2025-01-13T21:39:25.297249092Z" level=error msg="encountered an error cleaning up failed sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297283 containerd[1791]: time="2025-01-13T21:39:25.297208176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297321 containerd[1791]: time="2025-01-13T21:39:25.297272701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297352 kubelet[3268]: E0113 21:39:25.297333 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297352 kubelet[3268]: E0113 21:39:25.297350 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:25.297393 kubelet[3268]: E0113 21:39:25.297363 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:25.297393 kubelet[3268]: E0113 21:39:25.297374 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297393 kubelet[3268]: E0113 21:39:25.297383 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.297393 kubelet[3268]: E0113 21:39:25.297387 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:25.297490 kubelet[3268]: E0113 21:39:25.297401 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:25.297490 kubelet[3268]: E0113 21:39:25.297402 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:25.297490 kubelet[3268]: E0113 
21:39:25.297417 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:25.297490 kubelet[3268]: E0113 21:39:25.297420 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:25.297567 kubelet[3268]: E0113 21:39:25.297439 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" podUID="2fc98b87-986b-42c4-bb81-8550837ec10b" Jan 13 21:39:25.297567 kubelet[3268]: E0113 21:39:25.297450 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" podUID="547ff756-0fe2-4d76-8a18-b23758fbd4a2" Jan 13 21:39:25.298179 containerd[1791]: time="2025-01-13T21:39:25.298162183Z" level=error msg="Failed to destroy network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.298366 containerd[1791]: time="2025-01-13T21:39:25.298320911Z" level=error msg="encountered an error cleaning up failed sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.298366 containerd[1791]: time="2025-01-13T21:39:25.298351076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.298444 kubelet[3268]: E0113 21:39:25.298434 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:25.298481 kubelet[3268]: E0113 21:39:25.298464 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:25.298514 kubelet[3268]: E0113 21:39:25.298484 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:25.298544 kubelet[3268]: E0113 21:39:25.298521 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqn4h" podUID="3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28" Jan 13 21:39:26.255832 kubelet[3268]: I0113 21:39:26.255792 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f" Jan 13 21:39:26.256146 containerd[1791]: time="2025-01-13T21:39:26.256113560Z" level=info msg="StopPodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\"" Jan 13 21:39:26.256385 containerd[1791]: time="2025-01-13T21:39:26.256263231Z" level=info msg="Ensure that sandbox 377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f in task-service has been cleanup successfully" Jan 13 21:39:26.256385 containerd[1791]: time="2025-01-13T21:39:26.256354115Z" level=info msg="TearDown network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" successfully" Jan 13 21:39:26.256385 containerd[1791]: time="2025-01-13T21:39:26.256362404Z" level=info msg="StopPodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" returns successfully" Jan 13 21:39:26.256454 kubelet[3268]: I0113 21:39:26.256444 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf" Jan 13 21:39:26.256509 containerd[1791]: time="2025-01-13T21:39:26.256494811Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" Jan 13 21:39:26.256591 containerd[1791]: time="2025-01-13T21:39:26.256555192Z" level=info msg="TearDown network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" successfully" Jan 13 
21:39:26.256618 containerd[1791]: time="2025-01-13T21:39:26.256590913Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" returns successfully" Jan 13 21:39:26.256683 containerd[1791]: time="2025-01-13T21:39:26.256673000Z" level=info msg="StopPodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\"" Jan 13 21:39:26.256738 containerd[1791]: time="2025-01-13T21:39:26.256723989Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:26.256800 containerd[1791]: time="2025-01-13T21:39:26.256771957Z" level=info msg="TearDown network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" successfully" Jan 13 21:39:26.256830 containerd[1791]: time="2025-01-13T21:39:26.256800005Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" returns successfully" Jan 13 21:39:26.256830 containerd[1791]: time="2025-01-13T21:39:26.256772080Z" level=info msg="Ensure that sandbox 30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf in task-service has been cleanup successfully" Jan 13 21:39:26.256908 containerd[1791]: time="2025-01-13T21:39:26.256899927Z" level=info msg="TearDown network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" successfully" Jan 13 21:39:26.256908 containerd[1791]: time="2025-01-13T21:39:26.256907829Z" level=info msg="StopPodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" returns successfully" Jan 13 21:39:26.256945 containerd[1791]: time="2025-01-13T21:39:26.256930909Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:26.256987 containerd[1791]: time="2025-01-13T21:39:26.256975811Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" 
successfully" Jan 13 21:39:26.257035 containerd[1791]: time="2025-01-13T21:39:26.256986997Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully" Jan 13 21:39:26.257035 containerd[1791]: time="2025-01-13T21:39:26.257019213Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" Jan 13 21:39:26.257070 containerd[1791]: time="2025-01-13T21:39:26.257052587Z" level=info msg="TearDown network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" successfully" Jan 13 21:39:26.257070 containerd[1791]: time="2025-01-13T21:39:26.257058460Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" returns successfully" Jan 13 21:39:26.257187 containerd[1791]: time="2025-01-13T21:39:26.257178047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:4,}" Jan 13 21:39:26.257210 containerd[1791]: time="2025-01-13T21:39:26.257197598Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:26.257234 containerd[1791]: time="2025-01-13T21:39:26.257228032Z" level=info msg="TearDown network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" successfully" Jan 13 21:39:26.257252 containerd[1791]: time="2025-01-13T21:39:26.257233887Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" returns successfully" Jan 13 21:39:26.257269 kubelet[3268]: I0113 21:39:26.257246 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b" Jan 13 21:39:26.257362 containerd[1791]: time="2025-01-13T21:39:26.257352663Z" level=info msg="StopPodSandbox for 
\"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:26.257396 containerd[1791]: time="2025-01-13T21:39:26.257388769Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully" Jan 13 21:39:26.257417 containerd[1791]: time="2025-01-13T21:39:26.257395429Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully" Jan 13 21:39:26.257470 containerd[1791]: time="2025-01-13T21:39:26.257456995Z" level=info msg="StopPodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\"" Jan 13 21:39:26.257550 containerd[1791]: time="2025-01-13T21:39:26.257539701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:4,}" Jan 13 21:39:26.257569 containerd[1791]: time="2025-01-13T21:39:26.257562490Z" level=info msg="Ensure that sandbox 4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b in task-service has been cleanup successfully" Jan 13 21:39:26.257655 containerd[1791]: time="2025-01-13T21:39:26.257643158Z" level=info msg="TearDown network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" successfully" Jan 13 21:39:26.257677 containerd[1791]: time="2025-01-13T21:39:26.257654822Z" level=info msg="StopPodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" returns successfully" Jan 13 21:39:26.257756 containerd[1791]: time="2025-01-13T21:39:26.257745010Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\"" Jan 13 21:39:26.257809 systemd[1]: run-netns-cni\x2d96a5dcac\x2d4349\x2d9682\x2d0c04\x2d750e73c6109d.mount: Deactivated successfully. 
Jan 13 21:39:26.257921 containerd[1791]: time="2025-01-13T21:39:26.257789611Z" level=info msg="TearDown network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" successfully" Jan 13 21:39:26.257921 containerd[1791]: time="2025-01-13T21:39:26.257814786Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" returns successfully" Jan 13 21:39:26.257955 containerd[1791]: time="2025-01-13T21:39:26.257927047Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\"" Jan 13 21:39:26.258144 containerd[1791]: time="2025-01-13T21:39:26.257986499Z" level=info msg="TearDown network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" successfully" Jan 13 21:39:26.258178 containerd[1791]: time="2025-01-13T21:39:26.258144557Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" returns successfully" Jan 13 21:39:26.258282 kubelet[3268]: I0113 21:39:26.258266 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817" Jan 13 21:39:26.258885 containerd[1791]: time="2025-01-13T21:39:26.258494641Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\"" Jan 13 21:39:26.258885 containerd[1791]: time="2025-01-13T21:39:26.258568735Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully" Jan 13 21:39:26.258885 containerd[1791]: time="2025-01-13T21:39:26.258597336Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully" Jan 13 21:39:26.258885 containerd[1791]: time="2025-01-13T21:39:26.258654632Z" level=info msg="StopPodSandbox for 
\"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\"" Jan 13 21:39:26.258885 containerd[1791]: time="2025-01-13T21:39:26.258806422Z" level=info msg="Ensure that sandbox a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817 in task-service has been cleanup successfully" Jan 13 21:39:26.259084 containerd[1791]: time="2025-01-13T21:39:26.259070169Z" level=info msg="TearDown network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" successfully" Jan 13 21:39:26.259084 containerd[1791]: time="2025-01-13T21:39:26.259081904Z" level=info msg="StopPodSandbox for \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" returns successfully" Jan 13 21:39:26.259300 containerd[1791]: time="2025-01-13T21:39:26.259285445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:4,}" Jan 13 21:39:26.259350 containerd[1791]: time="2025-01-13T21:39:26.259336632Z" level=info msg="StopPodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" Jan 13 21:39:26.259589 containerd[1791]: time="2025-01-13T21:39:26.259559040Z" level=info msg="TearDown network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" successfully" Jan 13 21:39:26.259589 containerd[1791]: time="2025-01-13T21:39:26.259587533Z" level=info msg="StopPodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" returns successfully" Jan 13 21:39:26.259731 containerd[1791]: time="2025-01-13T21:39:26.259721924Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\"" Jan 13 21:39:26.259787 containerd[1791]: time="2025-01-13T21:39:26.259761404Z" level=info msg="TearDown network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" successfully" Jan 13 21:39:26.259787 containerd[1791]: 
time="2025-01-13T21:39:26.259779253Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" returns successfully" Jan 13 21:39:26.259842 kubelet[3268]: I0113 21:39:26.259833 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f" Jan 13 21:39:26.259972 containerd[1791]: time="2025-01-13T21:39:26.259961621Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\"" Jan 13 21:39:26.260016 containerd[1791]: time="2025-01-13T21:39:26.260006558Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully" Jan 13 21:39:26.260042 containerd[1791]: time="2025-01-13T21:39:26.260015813Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully" Jan 13 21:39:26.260072 containerd[1791]: time="2025-01-13T21:39:26.260052070Z" level=info msg="StopPodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\"" Jan 13 21:39:26.260115 systemd[1]: run-netns-cni\x2d2bf334a7\x2dd962\x2d64eb\x2d57ec\x2dc4f341161eb4.mount: Deactivated successfully. Jan 13 21:39:26.260171 systemd[1]: run-netns-cni\x2d9d306876\x2d45e2\x2da72b\x2d7de2\x2d9726d57088e8.mount: Deactivated successfully. 
Jan 13 21:39:26.260234 containerd[1791]: time="2025-01-13T21:39:26.260162387Z" level=info msg="Ensure that sandbox 87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f in task-service has been cleanup successfully" Jan 13 21:39:26.260234 containerd[1791]: time="2025-01-13T21:39:26.260188979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:4,}" Jan 13 21:39:26.260299 containerd[1791]: time="2025-01-13T21:39:26.260249545Z" level=info msg="TearDown network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" successfully" Jan 13 21:39:26.260299 containerd[1791]: time="2025-01-13T21:39:26.260260432Z" level=info msg="StopPodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" returns successfully" Jan 13 21:39:26.260371 containerd[1791]: time="2025-01-13T21:39:26.260360210Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" Jan 13 21:39:26.260407 containerd[1791]: time="2025-01-13T21:39:26.260397980Z" level=info msg="TearDown network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" successfully" Jan 13 21:39:26.260426 containerd[1791]: time="2025-01-13T21:39:26.260407602Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" returns successfully" Jan 13 21:39:26.260541 containerd[1791]: time="2025-01-13T21:39:26.260527412Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:26.260595 containerd[1791]: time="2025-01-13T21:39:26.260577047Z" level=info msg="TearDown network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" successfully" Jan 13 21:39:26.260595 containerd[1791]: time="2025-01-13T21:39:26.260589564Z" level=info msg="StopPodSandbox for 
\"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" returns successfully" Jan 13 21:39:26.260684 kubelet[3268]: I0113 21:39:26.260675 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0" Jan 13 21:39:26.260724 containerd[1791]: time="2025-01-13T21:39:26.260686507Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:26.260744 containerd[1791]: time="2025-01-13T21:39:26.260726055Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully" Jan 13 21:39:26.260744 containerd[1791]: time="2025-01-13T21:39:26.260732394Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully" Jan 13 21:39:26.260910 containerd[1791]: time="2025-01-13T21:39:26.260895998Z" level=info msg="StopPodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\"" Jan 13 21:39:26.260962 containerd[1791]: time="2025-01-13T21:39:26.260910352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:4,}" Jan 13 21:39:26.261041 containerd[1791]: time="2025-01-13T21:39:26.261027728Z" level=info msg="Ensure that sandbox e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0 in task-service has been cleanup successfully" Jan 13 21:39:26.261121 containerd[1791]: time="2025-01-13T21:39:26.261111120Z" level=info msg="TearDown network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" successfully" Jan 13 21:39:26.261142 containerd[1791]: time="2025-01-13T21:39:26.261122424Z" level=info msg="StopPodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" returns successfully" Jan 13 
21:39:26.261245 containerd[1791]: time="2025-01-13T21:39:26.261235561Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\"" Jan 13 21:39:26.261289 containerd[1791]: time="2025-01-13T21:39:26.261281038Z" level=info msg="TearDown network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" successfully" Jan 13 21:39:26.261312 containerd[1791]: time="2025-01-13T21:39:26.261289542Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" returns successfully" Jan 13 21:39:26.261403 containerd[1791]: time="2025-01-13T21:39:26.261394321Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\"" Jan 13 21:39:26.261439 containerd[1791]: time="2025-01-13T21:39:26.261432282Z" level=info msg="TearDown network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" successfully" Jan 13 21:39:26.261458 containerd[1791]: time="2025-01-13T21:39:26.261439042Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" returns successfully" Jan 13 21:39:26.261541 containerd[1791]: time="2025-01-13T21:39:26.261531309Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\"" Jan 13 21:39:26.261584 containerd[1791]: time="2025-01-13T21:39:26.261576053Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully" Jan 13 21:39:26.261618 containerd[1791]: time="2025-01-13T21:39:26.261584025Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully" Jan 13 21:39:26.261752 containerd[1791]: time="2025-01-13T21:39:26.261743842Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:4,}" Jan 13 21:39:26.262193 systemd[1]: run-netns-cni\x2d4a27dd44\x2d69fc\x2d537f\x2d8eeb\x2d438eab61b4ad.mount: Deactivated successfully. Jan 13 21:39:26.262240 systemd[1]: run-netns-cni\x2dfa6d8de5\x2da423\x2d5aaa\x2dbaee\x2d2eb8873bdc4b.mount: Deactivated successfully. Jan 13 21:39:26.264285 systemd[1]: run-netns-cni\x2d72f5569a\x2d71db\x2db150\x2d790f\x2dabd41089fb51.mount: Deactivated successfully. Jan 13 21:39:26.304958 containerd[1791]: time="2025-01-13T21:39:26.304926656Z" level=error msg="Failed to destroy network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.305732 containerd[1791]: time="2025-01-13T21:39:26.305171078Z" level=error msg="Failed to destroy network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.305806 containerd[1791]: time="2025-01-13T21:39:26.305453592Z" level=error msg="encountered an error cleaning up failed sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.305868 containerd[1791]: time="2025-01-13T21:39:26.305850279Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.305954 containerd[1791]: time="2025-01-13T21:39:26.305930925Z" level=error msg="encountered an error cleaning up failed sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.306001 containerd[1791]: time="2025-01-13T21:39:26.305981461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.306151 kubelet[3268]: E0113 21:39:26.306128 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.306420 kubelet[3268]: E0113 21:39:26.306193 3268 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:26.306420 kubelet[3268]: E0113 21:39:26.306216 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqn4h" Jan 13 21:39:26.306420 kubelet[3268]: E0113 21:39:26.306274 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dqn4h_kube-system(3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqn4h" podUID="3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28" Jan 13 21:39:26.306542 kubelet[3268]: E0113 21:39:26.306128 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.306542 kubelet[3268]: E0113 21:39:26.306519 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:26.306542 kubelet[3268]: E0113 21:39:26.306536 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" Jan 13 21:39:26.306634 kubelet[3268]: E0113 21:39:26.306565 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-rmn7p_calico-apiserver(dac5f61b-c5c7-4a9f-8108-2532f35a9873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" podUID="dac5f61b-c5c7-4a9f-8108-2532f35a9873" Jan 13 
21:39:26.313631 containerd[1791]: time="2025-01-13T21:39:26.313601558Z" level=error msg="Failed to destroy network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.313795 containerd[1791]: time="2025-01-13T21:39:26.313781929Z" level=error msg="encountered an error cleaning up failed sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.313827 containerd[1791]: time="2025-01-13T21:39:26.313817154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.313965 kubelet[3268]: E0113 21:39:26.313953 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314024 kubelet[3268]: E0113 21:39:26.313990 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:26.314024 kubelet[3268]: E0113 21:39:26.314012 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" Jan 13 21:39:26.314087 kubelet[3268]: E0113 21:39:26.314052 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4cfd48dc-t2q7n_calico-apiserver(547ff756-0fe2-4d76-8a18-b23758fbd4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" podUID="547ff756-0fe2-4d76-8a18-b23758fbd4a2" Jan 13 21:39:26.314531 containerd[1791]: time="2025-01-13T21:39:26.314515474Z" level=error msg="Failed to destroy network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314580 containerd[1791]: time="2025-01-13T21:39:26.314565186Z" level=error msg="Failed to destroy network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314688 containerd[1791]: time="2025-01-13T21:39:26.314676723Z" level=error msg="encountered an error cleaning up failed sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314713 containerd[1791]: time="2025-01-13T21:39:26.314703772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314765 containerd[1791]: time="2025-01-13T21:39:26.314747269Z" level=error msg="encountered an error cleaning up failed sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314807 containerd[1791]: time="2025-01-13T21:39:26.314781118Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314845 kubelet[3268]: E0113 21:39:26.314798 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314845 kubelet[3268]: E0113 21:39:26.314832 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:26.314883 kubelet[3268]: E0113 21:39:26.314850 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-svkr6" Jan 13 21:39:26.314883 kubelet[3268]: E0113 21:39:26.314861 3268 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.314883 kubelet[3268]: E0113 21:39:26.314881 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:26.314937 kubelet[3268]: E0113 21:39:26.314886 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-svkr6_kube-system(ec5b66ae-6390-4504-928f-0cd85519c73d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-svkr6" podUID="ec5b66ae-6390-4504-928f-0cd85519c73d" Jan 13 21:39:26.314937 kubelet[3268]: E0113 21:39:26.314895 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" Jan 13 21:39:26.314937 kubelet[3268]: E0113 21:39:26.314918 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7795d5b4-82g4w_calico-system(2fc98b87-986b-42c4-bb81-8550837ec10b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" podUID="2fc98b87-986b-42c4-bb81-8550837ec10b" Jan 13 21:39:26.317194 containerd[1791]: time="2025-01-13T21:39:26.317177825Z" level=error msg="Failed to destroy network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.317344 containerd[1791]: time="2025-01-13T21:39:26.317331381Z" level=error msg="encountered an error cleaning up failed sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.317385 containerd[1791]: time="2025-01-13T21:39:26.317363267Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.317473 kubelet[3268]: E0113 21:39:26.317455 3268 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:39:26.317512 kubelet[3268]: E0113 21:39:26.317475 3268 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:26.317512 kubelet[3268]: E0113 21:39:26.317488 3268 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhvsz" Jan 13 21:39:26.317560 kubelet[3268]: E0113 21:39:26.317514 3268 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhvsz_calico-system(d53850a4-8d71-4e78-b476-3e30d51488fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhvsz" podUID="d53850a4-8d71-4e78-b476-3e30d51488fd" Jan 13 21:39:26.468253 containerd[1791]: time="2025-01-13T21:39:26.468200125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:26.468459 containerd[1791]: time="2025-01-13T21:39:26.468417649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:39:26.468784 containerd[1791]: time="2025-01-13T21:39:26.468742087Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:26.469579 containerd[1791]: time="2025-01-13T21:39:26.469566202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:26.469981 containerd[1791]: time="2025-01-13T21:39:26.469970362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.230608099s" Jan 13 21:39:26.470010 containerd[1791]: time="2025-01-13T21:39:26.469984473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:39:26.473316 containerd[1791]: time="2025-01-13T21:39:26.473299702Z" level=info msg="CreateContainer within sandbox \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:39:26.481924 containerd[1791]: time="2025-01-13T21:39:26.481882621Z" level=info msg="CreateContainer within sandbox \"451bad7b0f68715007d634155c85c0b90cd8144303927c496bba4a80d7ef9c1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e074108f3e1da359c198d1c82a5667c40e969781637be856493a0ab154881893\"" Jan 13 21:39:26.482103 containerd[1791]: time="2025-01-13T21:39:26.482069425Z" level=info msg="StartContainer for \"e074108f3e1da359c198d1c82a5667c40e969781637be856493a0ab154881893\"" Jan 13 21:39:26.507270 systemd[1]: Started cri-containerd-e074108f3e1da359c198d1c82a5667c40e969781637be856493a0ab154881893.scope - libcontainer container e074108f3e1da359c198d1c82a5667c40e969781637be856493a0ab154881893. Jan 13 21:39:26.513980 kubelet[3268]: I0113 21:39:26.513936 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:39:26.526188 containerd[1791]: time="2025-01-13T21:39:26.526163978Z" level=info msg="StartContainer for \"e074108f3e1da359c198d1c82a5667c40e969781637be856493a0ab154881893\" returns successfully" Jan 13 21:39:26.588593 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:39:26.588647 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 21:39:26.666859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042982426.mount: Deactivated successfully. Jan 13 21:39:27.263318 kubelet[3268]: I0113 21:39:27.263268 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137" Jan 13 21:39:27.263620 containerd[1791]: time="2025-01-13T21:39:27.263591517Z" level=info msg="StopPodSandbox for \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\"" Jan 13 21:39:27.263908 containerd[1791]: time="2025-01-13T21:39:27.263770777Z" level=info msg="Ensure that sandbox 88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137 in task-service has been cleanup successfully" Jan 13 21:39:27.263949 containerd[1791]: time="2025-01-13T21:39:27.263907996Z" level=info msg="TearDown network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\" successfully" Jan 13 21:39:27.263949 containerd[1791]: time="2025-01-13T21:39:27.263920208Z" level=info msg="StopPodSandbox for \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\" returns successfully" Jan 13 21:39:27.264091 containerd[1791]: time="2025-01-13T21:39:27.264075322Z" level=info msg="StopPodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\"" Jan 13 21:39:27.264150 containerd[1791]: time="2025-01-13T21:39:27.264138562Z" level=info msg="TearDown network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" successfully" Jan 13 21:39:27.264150 containerd[1791]: time="2025-01-13T21:39:27.264149050Z" level=info msg="StopPodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" returns successfully" Jan 13 21:39:27.264372 containerd[1791]: time="2025-01-13T21:39:27.264354407Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\"" Jan 13 21:39:27.264428 containerd[1791]: 
time="2025-01-13T21:39:27.264415592Z" level=info msg="TearDown network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" successfully" Jan 13 21:39:27.264466 containerd[1791]: time="2025-01-13T21:39:27.264426552Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" returns successfully" Jan 13 21:39:27.264550 kubelet[3268]: I0113 21:39:27.264538 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0" Jan 13 21:39:27.264627 containerd[1791]: time="2025-01-13T21:39:27.264608940Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\"" Jan 13 21:39:27.264709 containerd[1791]: time="2025-01-13T21:39:27.264692526Z" level=info msg="TearDown network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" successfully" Jan 13 21:39:27.264755 containerd[1791]: time="2025-01-13T21:39:27.264709179Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" returns successfully" Jan 13 21:39:27.264833 containerd[1791]: time="2025-01-13T21:39:27.264817486Z" level=info msg="StopPodSandbox for \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\"" Jan 13 21:39:27.264918 containerd[1791]: time="2025-01-13T21:39:27.264899682Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\"" Jan 13 21:39:27.264963 containerd[1791]: time="2025-01-13T21:39:27.264947626Z" level=info msg="Ensure that sandbox ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0 in task-service has been cleanup successfully" Jan 13 21:39:27.264992 containerd[1791]: time="2025-01-13T21:39:27.264975430Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully" Jan 13 
21:39:27.264992 containerd[1791]: time="2025-01-13T21:39:27.264986079Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully" Jan 13 21:39:27.265075 containerd[1791]: time="2025-01-13T21:39:27.265062579Z" level=info msg="TearDown network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\" successfully" Jan 13 21:39:27.265110 containerd[1791]: time="2025-01-13T21:39:27.265074784Z" level=info msg="StopPodSandbox for \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\" returns successfully" Jan 13 21:39:27.265236 containerd[1791]: time="2025-01-13T21:39:27.265219574Z" level=info msg="StopPodSandbox for \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\"" Jan 13 21:39:27.265277 containerd[1791]: time="2025-01-13T21:39:27.265234136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:5,}" Jan 13 21:39:27.265306 containerd[1791]: time="2025-01-13T21:39:27.265286772Z" level=info msg="TearDown network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" successfully" Jan 13 21:39:27.265306 containerd[1791]: time="2025-01-13T21:39:27.265298251Z" level=info msg="StopPodSandbox for \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" returns successfully" Jan 13 21:39:27.265537 containerd[1791]: time="2025-01-13T21:39:27.265518118Z" level=info msg="StopPodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" Jan 13 21:39:27.265606 containerd[1791]: time="2025-01-13T21:39:27.265595446Z" level=info msg="TearDown network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" successfully" Jan 13 21:39:27.265649 containerd[1791]: time="2025-01-13T21:39:27.265606886Z" level=info msg="StopPodSandbox for 
\"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" returns successfully" Jan 13 21:39:27.265756 containerd[1791]: time="2025-01-13T21:39:27.265745113Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\"" Jan 13 21:39:27.265805 containerd[1791]: time="2025-01-13T21:39:27.265795870Z" level=info msg="TearDown network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" successfully" Jan 13 21:39:27.265823 containerd[1791]: time="2025-01-13T21:39:27.265805778Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" returns successfully" Jan 13 21:39:27.265841 kubelet[3268]: I0113 21:39:27.265820 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e" Jan 13 21:39:27.265926 containerd[1791]: time="2025-01-13T21:39:27.265915815Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\"" Jan 13 21:39:27.265970 containerd[1791]: time="2025-01-13T21:39:27.265962205Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully" Jan 13 21:39:27.265999 containerd[1791]: time="2025-01-13T21:39:27.265971699Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully" Jan 13 21:39:27.266047 containerd[1791]: time="2025-01-13T21:39:27.266036412Z" level=info msg="StopPodSandbox for \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\"" Jan 13 21:39:27.266088 systemd[1]: run-netns-cni\x2d63b5605c\x2d0597\x2d9658\x2dc06d\x2d59ba23d6b576.mount: Deactivated successfully. 
Jan 13 21:39:27.266215 containerd[1791]: time="2025-01-13T21:39:27.266157958Z" level=info msg="Ensure that sandbox 23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e in task-service has been cleanup successfully" Jan 13 21:39:27.266215 containerd[1791]: time="2025-01-13T21:39:27.266191731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:5,}" Jan 13 21:39:27.266277 containerd[1791]: time="2025-01-13T21:39:27.266267268Z" level=info msg="TearDown network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\" successfully" Jan 13 21:39:27.266295 containerd[1791]: time="2025-01-13T21:39:27.266278816Z" level=info msg="StopPodSandbox for \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\" returns successfully" Jan 13 21:39:27.266482 containerd[1791]: time="2025-01-13T21:39:27.266470251Z" level=info msg="StopPodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\"" Jan 13 21:39:27.266527 containerd[1791]: time="2025-01-13T21:39:27.266518324Z" level=info msg="TearDown network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" successfully" Jan 13 21:39:27.266545 containerd[1791]: time="2025-01-13T21:39:27.266528379Z" level=info msg="StopPodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" returns successfully" Jan 13 21:39:27.266637 containerd[1791]: time="2025-01-13T21:39:27.266624870Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" Jan 13 21:39:27.266685 containerd[1791]: time="2025-01-13T21:39:27.266676304Z" level=info msg="TearDown network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" successfully" Jan 13 21:39:27.266715 containerd[1791]: time="2025-01-13T21:39:27.266684555Z" level=info msg="StopPodSandbox for 
\"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" returns successfully" Jan 13 21:39:27.266783 kubelet[3268]: I0113 21:39:27.266773 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c" Jan 13 21:39:27.266810 containerd[1791]: time="2025-01-13T21:39:27.266787976Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:27.266836 containerd[1791]: time="2025-01-13T21:39:27.266829537Z" level=info msg="TearDown network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" successfully" Jan 13 21:39:27.266854 containerd[1791]: time="2025-01-13T21:39:27.266836397Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" returns successfully" Jan 13 21:39:27.267022 containerd[1791]: time="2025-01-13T21:39:27.267012491Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:27.267055 containerd[1791]: time="2025-01-13T21:39:27.267025258Z" level=info msg="StopPodSandbox for \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\"" Jan 13 21:39:27.267055 containerd[1791]: time="2025-01-13T21:39:27.267048431Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully" Jan 13 21:39:27.267055 containerd[1791]: time="2025-01-13T21:39:27.267054440Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267114488Z" level=info msg="Ensure that sandbox 3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c in task-service has been cleanup successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267193955Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:5,}" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267204915Z" level=info msg="TearDown network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\" successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267214881Z" level=info msg="StopPodSandbox for \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\" returns successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267320795Z" level=info msg="StopPodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\"" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267357450Z" level=info msg="TearDown network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267363521Z" level=info msg="StopPodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" returns successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267489885Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\"" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267521633Z" level=info msg="TearDown network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" successfully" Jan 13 21:39:27.267527 containerd[1791]: time="2025-01-13T21:39:27.267526835Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" returns successfully" Jan 13 21:39:27.267722 containerd[1791]: time="2025-01-13T21:39:27.267620299Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\"" Jan 13 21:39:27.267722 containerd[1791]: 
time="2025-01-13T21:39:27.267656543Z" level=info msg="TearDown network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" successfully" Jan 13 21:39:27.267722 containerd[1791]: time="2025-01-13T21:39:27.267664676Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" returns successfully" Jan 13 21:39:27.267796 kubelet[3268]: I0113 21:39:27.267654 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546" Jan 13 21:39:27.267819 containerd[1791]: time="2025-01-13T21:39:27.267778177Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\"" Jan 13 21:39:27.267840 containerd[1791]: time="2025-01-13T21:39:27.267821165Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully" Jan 13 21:39:27.267840 containerd[1791]: time="2025-01-13T21:39:27.267830415Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully" Jan 13 21:39:27.267894 containerd[1791]: time="2025-01-13T21:39:27.267846254Z" level=info msg="StopPodSandbox for \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\"" Jan 13 21:39:27.267963 containerd[1791]: time="2025-01-13T21:39:27.267954482Z" level=info msg="Ensure that sandbox 90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546 in task-service has been cleanup successfully" Jan 13 21:39:27.268185 containerd[1791]: time="2025-01-13T21:39:27.268169059Z" level=info msg="TearDown network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\" successfully" Jan 13 21:39:27.268223 containerd[1791]: time="2025-01-13T21:39:27.268184373Z" level=info msg="StopPodSandbox for \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\" returns 
successfully" Jan 13 21:39:27.268261 systemd[1]: run-netns-cni\x2d6febeb65\x2d1354\x2dd7de\x2df064\x2dd4d0eb7b4cf2.mount: Deactivated successfully. Jan 13 21:39:27.268319 containerd[1791]: time="2025-01-13T21:39:27.268281146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:5,}" Jan 13 21:39:27.268336 systemd[1]: run-netns-cni\x2d55b3ec29\x2d0027\x2d5eae\x2d2818\x2d1eb03eceae83.mount: Deactivated successfully. Jan 13 21:39:27.268438 containerd[1791]: time="2025-01-13T21:39:27.268414947Z" level=info msg="StopPodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\"" Jan 13 21:39:27.268937 containerd[1791]: time="2025-01-13T21:39:27.268502867Z" level=info msg="TearDown network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" successfully" Jan 13 21:39:27.268937 containerd[1791]: time="2025-01-13T21:39:27.268540290Z" level=info msg="StopPodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" returns successfully" Jan 13 21:39:27.269132 containerd[1791]: time="2025-01-13T21:39:27.269120229Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" Jan 13 21:39:27.269409 containerd[1791]: time="2025-01-13T21:39:27.269375201Z" level=info msg="TearDown network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" successfully" Jan 13 21:39:27.269453 containerd[1791]: time="2025-01-13T21:39:27.269409614Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" returns successfully" Jan 13 21:39:27.269554 containerd[1791]: time="2025-01-13T21:39:27.269540986Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:27.269627 containerd[1791]: time="2025-01-13T21:39:27.269599974Z" level=info msg="TearDown 
network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" successfully" Jan 13 21:39:27.269648 containerd[1791]: time="2025-01-13T21:39:27.269629129Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" returns successfully" Jan 13 21:39:27.269757 containerd[1791]: time="2025-01-13T21:39:27.269744345Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:27.269820 containerd[1791]: time="2025-01-13T21:39:27.269798190Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" successfully" Jan 13 21:39:27.269843 containerd[1791]: time="2025-01-13T21:39:27.269821533Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully" Jan 13 21:39:27.270062 containerd[1791]: time="2025-01-13T21:39:27.270051247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:5,}" Jan 13 21:39:27.270439 kubelet[3268]: I0113 21:39:27.270428 3268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e" Jan 13 21:39:27.270727 containerd[1791]: time="2025-01-13T21:39:27.270713260Z" level=info msg="StopPodSandbox for \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\"" Jan 13 21:39:27.270825 containerd[1791]: time="2025-01-13T21:39:27.270816307Z" level=info msg="Ensure that sandbox e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e in task-service has been cleanup successfully" Jan 13 21:39:27.270841 systemd[1]: run-netns-cni\x2d907ac055\x2d8a4a\x2d2388\x2d55ab\x2dc43dffb1ff78.mount: Deactivated successfully. 
Jan 13 21:39:27.270920 containerd[1791]: time="2025-01-13T21:39:27.270900931Z" level=info msg="TearDown network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\" successfully" Jan 13 21:39:27.270940 containerd[1791]: time="2025-01-13T21:39:27.270920502Z" level=info msg="StopPodSandbox for \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\" returns successfully" Jan 13 21:39:27.270930 systemd[1]: run-netns-cni\x2d218d5878\x2d0f2f\x2dfcc0\x2d4499\x2d01a1abfb4bf2.mount: Deactivated successfully. Jan 13 21:39:27.271089 containerd[1791]: time="2025-01-13T21:39:27.271078652Z" level=info msg="StopPodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\"" Jan 13 21:39:27.271125 containerd[1791]: time="2025-01-13T21:39:27.271118108Z" level=info msg="TearDown network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" successfully" Jan 13 21:39:27.271151 containerd[1791]: time="2025-01-13T21:39:27.271125349Z" level=info msg="StopPodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" returns successfully" Jan 13 21:39:27.271252 containerd[1791]: time="2025-01-13T21:39:27.271243152Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" Jan 13 21:39:27.271289 containerd[1791]: time="2025-01-13T21:39:27.271281334Z" level=info msg="TearDown network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" successfully" Jan 13 21:39:27.271311 containerd[1791]: time="2025-01-13T21:39:27.271288372Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" returns successfully" Jan 13 21:39:27.271404 containerd[1791]: time="2025-01-13T21:39:27.271393202Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:27.271451 containerd[1791]: 
time="2025-01-13T21:39:27.271442419Z" level=info msg="TearDown network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" successfully" Jan 13 21:39:27.271480 containerd[1791]: time="2025-01-13T21:39:27.271452436Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" returns successfully" Jan 13 21:39:27.271585 containerd[1791]: time="2025-01-13T21:39:27.271574793Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:27.271636 containerd[1791]: time="2025-01-13T21:39:27.271626713Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully" Jan 13 21:39:27.271653 containerd[1791]: time="2025-01-13T21:39:27.271637209Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully" Jan 13 21:39:27.271821 containerd[1791]: time="2025-01-13T21:39:27.271811895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:5,}" Jan 13 21:39:27.273820 systemd[1]: run-netns-cni\x2d9c45555c\x2d53ae\x2d9d1c\x2d4b2e\x2d09ec4465a939.mount: Deactivated successfully. 
Jan 13 21:39:27.277459 kubelet[3268]: I0113 21:39:27.277436 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-srrt9" podStartSLOduration=1.9062323 podStartE2EDuration="12.277398982s" podCreationTimestamp="2025-01-13 21:39:15 +0000 UTC" firstStartedPulling="2025-01-13 21:39:16.098947137 +0000 UTC m=+23.998646706" lastFinishedPulling="2025-01-13 21:39:26.470113816 +0000 UTC m=+34.369813388" observedRunningTime="2025-01-13 21:39:27.277112346 +0000 UTC m=+35.176811917" watchObservedRunningTime="2025-01-13 21:39:27.277398982 +0000 UTC m=+35.177098545" Jan 13 21:39:27.338607 systemd-networkd[1710]: cali1b2390a153e: Link UP Jan 13 21:39:27.338709 systemd-networkd[1710]: cali1b2390a153e: Gained carrier Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.285 [INFO][5654] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5654] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0 calico-apiserver-f4cfd48dc- calico-apiserver dac5f61b-c5c7-4a9f-8108-2532f35a9873 665 0 2025-01-13 21:39:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4cfd48dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.0-a-ed112912ac calico-apiserver-f4cfd48dc-rmn7p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b2390a153e [] []}} ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 
21:39:27.299 [INFO][5654] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.315 [INFO][5773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" HandleID="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.320 [INFO][5773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" HandleID="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.0-a-ed112912ac", "pod":"calico-apiserver-f4cfd48dc-rmn7p", "timestamp":"2025-01-13 21:39:27.31582824 +0000 UTC"}, Hostname:"ci-4152.2.0-a-ed112912ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.320 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5773] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-ed112912ac' Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5773] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.323 [INFO][5773] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.326 [INFO][5773] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.326 [INFO][5773] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.327 [INFO][5773] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.328 [INFO][5773] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.328 [INFO][5773] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.330 [INFO][5773] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5773] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.17.1/26] block=192.168.17.0/26 handle="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5773] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.1/26] handle="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:39:27.348045 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.1/26] IPv6=[] ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" HandleID="k8s-pod-network.e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.348788 containerd[1791]: 2025-01-13 21:39:27.334 [INFO][5654] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0", GenerateName:"calico-apiserver-f4cfd48dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"dac5f61b-c5c7-4a9f-8108-2532f35a9873", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"f4cfd48dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"", Pod:"calico-apiserver-f4cfd48dc-rmn7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b2390a153e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.348788 containerd[1791]: 2025-01-13 21:39:27.334 [INFO][5654] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.1/32] ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.348788 containerd[1791]: 2025-01-13 21:39:27.334 [INFO][5654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b2390a153e ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.348788 containerd[1791]: 2025-01-13 21:39:27.338 [INFO][5654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" 
WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.348788 containerd[1791]: 2025-01-13 21:39:27.338 [INFO][5654] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0", GenerateName:"calico-apiserver-f4cfd48dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"dac5f61b-c5c7-4a9f-8108-2532f35a9873", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4cfd48dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d", Pod:"calico-apiserver-f4cfd48dc-rmn7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b2390a153e", MAC:"62:aa:66:09:39:0f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.348788 containerd[1791]: 2025-01-13 21:39:27.344 [INFO][5654] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-rmn7p" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--rmn7p-eth0" Jan 13 21:39:27.354469 systemd-networkd[1710]: cali7a5d8679f5b: Link UP Jan 13 21:39:27.354597 systemd-networkd[1710]: cali7a5d8679f5b: Gained carrier Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.291 [INFO][5684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5684] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0 calico-kube-controllers-7b7795d5b4- calico-system 2fc98b87-986b-42c4-bb81-8550837ec10b 660 0 2025-01-13 21:39:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b7795d5b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152.2.0-a-ed112912ac calico-kube-controllers-7b7795d5b4-82g4w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7a5d8679f5b [] []}} ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5684] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.316 [INFO][5771] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" HandleID="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5771] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" HandleID="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002959d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.0-a-ed112912ac", "pod":"calico-kube-controllers-7b7795d5b4-82g4w", "timestamp":"2025-01-13 21:39:27.316218899 +0000 UTC"}, Hostname:"ci-4152.2.0-a-ed112912ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5771] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-ed112912ac' Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.334 [INFO][5771] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.338 [INFO][5771] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.340 [INFO][5771] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.341 [INFO][5771] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.344 [INFO][5771] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.344 [INFO][5771] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.345 [INFO][5771] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213 Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.348 [INFO][5771] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.352 [INFO][5771] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.17.2/26] block=192.168.17.0/26 handle="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.352 [INFO][5771] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.2/26] handle="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.352 [INFO][5771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:39:27.359314 containerd[1791]: 2025-01-13 21:39:27.352 [INFO][5771] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.2/26] IPv6=[] ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" HandleID="k8s-pod-network.23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.359736 containerd[1791]: 2025-01-13 21:39:27.353 [INFO][5684] cni-plugin/k8s.go 386: Populated endpoint ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0", GenerateName:"calico-kube-controllers-7b7795d5b4-", Namespace:"calico-system", SelfLink:"", UID:"2fc98b87-986b-42c4-bb81-8550837ec10b", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b7795d5b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"", Pod:"calico-kube-controllers-7b7795d5b4-82g4w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a5d8679f5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.359736 containerd[1791]: 2025-01-13 21:39:27.353 [INFO][5684] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.2/32] ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.359736 containerd[1791]: 2025-01-13 21:39:27.353 [INFO][5684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a5d8679f5b ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.359736 containerd[1791]: 2025-01-13 21:39:27.354 [INFO][5684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" 
Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.359736 containerd[1791]: 2025-01-13 21:39:27.354 [INFO][5684] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0", GenerateName:"calico-kube-controllers-7b7795d5b4-", Namespace:"calico-system", SelfLink:"", UID:"2fc98b87-986b-42c4-bb81-8550837ec10b", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b7795d5b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213", Pod:"calico-kube-controllers-7b7795d5b4-82g4w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7a5d8679f5b", MAC:"66:94:e4:94:2b:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.359736 containerd[1791]: 2025-01-13 21:39:27.358 [INFO][5684] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213" Namespace="calico-system" Pod="calico-kube-controllers-7b7795d5b4-82g4w" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--kube--controllers--7b7795d5b4--82g4w-eth0" Jan 13 21:39:27.361278 containerd[1791]: time="2025-01-13T21:39:27.361046461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:27.361278 containerd[1791]: time="2025-01-13T21:39:27.361267694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:27.361503 containerd[1791]: time="2025-01-13T21:39:27.361276114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.361503 containerd[1791]: time="2025-01-13T21:39:27.361322607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.368959 containerd[1791]: time="2025-01-13T21:39:27.368917257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:27.368959 containerd[1791]: time="2025-01-13T21:39:27.368947385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:27.368959 containerd[1791]: time="2025-01-13T21:39:27.368954335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.369066 containerd[1791]: time="2025-01-13T21:39:27.369008004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.370511 systemd-networkd[1710]: cali4959ed4311b: Link UP Jan 13 21:39:27.370610 systemd-networkd[1710]: cali4959ed4311b: Gained carrier Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.289 [INFO][5672] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5672] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0 calico-apiserver-f4cfd48dc- calico-apiserver 547ff756-0fe2-4d76-8a18-b23758fbd4a2 662 0 2025-01-13 21:39:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4cfd48dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.0-a-ed112912ac calico-apiserver-f4cfd48dc-t2q7n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4959ed4311b [] []}} ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5672] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.317 [INFO][5770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" HandleID="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" HandleID="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5d60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.0-a-ed112912ac", "pod":"calico-apiserver-f4cfd48dc-t2q7n", "timestamp":"2025-01-13 21:39:27.317511369 +0000 UTC"}, Hostname:"ci-4152.2.0-a-ed112912ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.321 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.352 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.352 [INFO][5770] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-ed112912ac' Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.354 [INFO][5770] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.356 [INFO][5770] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.360 [INFO][5770] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.361 [INFO][5770] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.362 [INFO][5770] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.362 [INFO][5770] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.363 [INFO][5770] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.365 [INFO][5770] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.368 [INFO][5770] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.17.3/26] block=192.168.17.0/26 handle="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.368 [INFO][5770] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.3/26] handle="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.368 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:39:27.376237 containerd[1791]: 2025-01-13 21:39:27.368 [INFO][5770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.3/26] IPv6=[] ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" HandleID="k8s-pod-network.46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Workload="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.376653 containerd[1791]: 2025-01-13 21:39:27.369 [INFO][5672] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0", GenerateName:"calico-apiserver-f4cfd48dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"547ff756-0fe2-4d76-8a18-b23758fbd4a2", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"f4cfd48dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"", Pod:"calico-apiserver-f4cfd48dc-t2q7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4959ed4311b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.376653 containerd[1791]: 2025-01-13 21:39:27.369 [INFO][5672] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.3/32] ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.376653 containerd[1791]: 2025-01-13 21:39:27.369 [INFO][5672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4959ed4311b ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.376653 containerd[1791]: 2025-01-13 21:39:27.370 [INFO][5672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" 
WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.376653 containerd[1791]: 2025-01-13 21:39:27.370 [INFO][5672] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0", GenerateName:"calico-apiserver-f4cfd48dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"547ff756-0fe2-4d76-8a18-b23758fbd4a2", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4cfd48dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e", Pod:"calico-apiserver-f4cfd48dc-t2q7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4959ed4311b", MAC:"4e:ba:f6:24:fd:02", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.376653 containerd[1791]: 2025-01-13 21:39:27.375 [INFO][5672] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e" Namespace="calico-apiserver" Pod="calico-apiserver-f4cfd48dc-t2q7n" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-calico--apiserver--f4cfd48dc--t2q7n-eth0" Jan 13 21:39:27.378131 systemd[1]: Started cri-containerd-e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d.scope - libcontainer container e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d. Jan 13 21:39:27.380466 systemd[1]: Started cri-containerd-23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213.scope - libcontainer container 23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213. Jan 13 21:39:27.384647 systemd-networkd[1710]: calia52e271b58b: Link UP Jan 13 21:39:27.384763 systemd-networkd[1710]: calia52e271b58b: Gained carrier Jan 13 21:39:27.386205 containerd[1791]: time="2025-01-13T21:39:27.386141165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:27.386205 containerd[1791]: time="2025-01-13T21:39:27.386176097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:27.386205 containerd[1791]: time="2025-01-13T21:39:27.386184321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.386491 containerd[1791]: time="2025-01-13T21:39:27.386463879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.291 [INFO][5691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5691] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0 csi-node-driver- calico-system d53850a4-8d71-4e78-b476-3e30d51488fd 596 0 2025-01-13 21:39:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152.2.0-a-ed112912ac csi-node-driver-nhvsz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia52e271b58b [] []}} ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.299 [INFO][5691] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.316 [INFO][5776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" HandleID="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Workload="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.390057 containerd[1791]: 
2025-01-13 21:39:27.322 [INFO][5776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" HandleID="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Workload="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.0-a-ed112912ac", "pod":"csi-node-driver-nhvsz", "timestamp":"2025-01-13 21:39:27.31630875 +0000 UTC"}, Hostname:"ci-4152.2.0-a-ed112912ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.322 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.368 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.368 [INFO][5776] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-ed112912ac' Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.370 [INFO][5776] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.372 [INFO][5776] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.374 [INFO][5776] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.375 [INFO][5776] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.376 [INFO][5776] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.376 [INFO][5776] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.377 [INFO][5776] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29 Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.379 [INFO][5776] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.382 [INFO][5776] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.17.4/26] block=192.168.17.0/26 handle="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.382 [INFO][5776] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.4/26] handle="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.382 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:39:27.390057 containerd[1791]: 2025-01-13 21:39:27.382 [INFO][5776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.4/26] IPv6=[] ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" HandleID="k8s-pod-network.79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Workload="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.390581 containerd[1791]: 2025-01-13 21:39:27.383 [INFO][5691] cni-plugin/k8s.go 386: Populated endpoint ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d53850a4-8d71-4e78-b476-3e30d51488fd", ResourceVersion:"596", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"", Pod:"csi-node-driver-nhvsz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia52e271b58b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.390581 containerd[1791]: 2025-01-13 21:39:27.383 [INFO][5691] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.4/32] ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.390581 containerd[1791]: 2025-01-13 21:39:27.383 [INFO][5691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia52e271b58b ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.390581 containerd[1791]: 2025-01-13 21:39:27.384 [INFO][5691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.390581 containerd[1791]: 2025-01-13 21:39:27.384 [INFO][5691] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d53850a4-8d71-4e78-b476-3e30d51488fd", ResourceVersion:"596", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29", Pod:"csi-node-driver-nhvsz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia52e271b58b", MAC:"06:9e:7f:08:d8:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.390581 containerd[1791]: 2025-01-13 21:39:27.389 [INFO][5691] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29" Namespace="calico-system" Pod="csi-node-driver-nhvsz" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-csi--node--driver--nhvsz-eth0" Jan 13 21:39:27.393529 systemd[1]: Started cri-containerd-46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e.scope - libcontainer container 46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e. Jan 13 21:39:27.400436 containerd[1791]: time="2025-01-13T21:39:27.400013034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:27.400436 containerd[1791]: time="2025-01-13T21:39:27.400345248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:27.400436 containerd[1791]: time="2025-01-13T21:39:27.400353836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.400436 containerd[1791]: time="2025-01-13T21:39:27.400397785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.403546 systemd-networkd[1710]: calic85bab3f1ba: Link UP Jan 13 21:39:27.403740 systemd-networkd[1710]: calic85bab3f1ba: Gained carrier Jan 13 21:39:27.405161 containerd[1791]: time="2025-01-13T21:39:27.405134119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-rmn7p,Uid:dac5f61b-c5c7-4a9f-8108-2532f35a9873,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d\"" Jan 13 21:39:27.406334 containerd[1791]: time="2025-01-13T21:39:27.406301585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:39:27.407198 containerd[1791]: time="2025-01-13T21:39:27.407176713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7795d5b4-82g4w,Uid:2fc98b87-986b-42c4-bb81-8550837ec10b,Namespace:calico-system,Attempt:5,} returns sandbox id \"23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213\"" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.304 [INFO][5743] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.312 [INFO][5743] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0 coredns-76f75df574- kube-system 3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28 664 0 2025-01-13 21:39:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.0-a-ed112912ac coredns-76f75df574-dqn4h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic85bab3f1ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" 
Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.312 [INFO][5743] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.329 [INFO][5826] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" HandleID="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Workload="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5826] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" HandleID="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Workload="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000298750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.0-a-ed112912ac", "pod":"coredns-76f75df574-dqn4h", "timestamp":"2025-01-13 21:39:27.32909727 +0000 UTC"}, Hostname:"ci-4152.2.0-a-ed112912ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.333 [INFO][5826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.382 [INFO][5826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.382 [INFO][5826] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-ed112912ac' Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.384 [INFO][5826] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.387 [INFO][5826] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.390 [INFO][5826] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.391 [INFO][5826] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.393 [INFO][5826] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.393 [INFO][5826] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.394 [INFO][5826] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472 Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.397 [INFO][5826] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" 
host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.400 [INFO][5826] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.5/26] block=192.168.17.0/26 handle="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.401 [INFO][5826] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.5/26] handle="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.401 [INFO][5826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:39:27.409949 containerd[1791]: 2025-01-13 21:39:27.401 [INFO][5826] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.5/26] IPv6=[] ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" HandleID="k8s-pod-network.e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Workload="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 21:39:27.410377 containerd[1791]: 2025-01-13 21:39:27.402 [INFO][5743] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"", Pod:"coredns-76f75df574-dqn4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic85bab3f1ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.410377 containerd[1791]: 2025-01-13 21:39:27.402 [INFO][5743] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.5/32] ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 21:39:27.410377 containerd[1791]: 2025-01-13 21:39:27.402 [INFO][5743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic85bab3f1ba ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 
21:39:27.410377 containerd[1791]: 2025-01-13 21:39:27.403 [INFO][5743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 21:39:27.410377 containerd[1791]: 2025-01-13 21:39:27.403 [INFO][5743] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472", Pod:"coredns-76f75df574-dqn4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calic85bab3f1ba", MAC:"12:ef:b0:4a:9e:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.410377 containerd[1791]: 2025-01-13 21:39:27.409 [INFO][5743] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472" Namespace="kube-system" Pod="coredns-76f75df574-dqn4h" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--dqn4h-eth0" Jan 13 21:39:27.414148 systemd[1]: Started cri-containerd-79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29.scope - libcontainer container 79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29. Jan 13 21:39:27.420240 containerd[1791]: time="2025-01-13T21:39:27.420119527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:27.420240 containerd[1791]: time="2025-01-13T21:39:27.420153558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:27.420240 containerd[1791]: time="2025-01-13T21:39:27.420160374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.420240 containerd[1791]: time="2025-01-13T21:39:27.420200098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.421311 systemd-networkd[1710]: cali56879e4707b: Link UP Jan 13 21:39:27.421455 systemd-networkd[1710]: cali56879e4707b: Gained carrier Jan 13 21:39:27.421540 containerd[1791]: time="2025-01-13T21:39:27.421508498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4cfd48dc-t2q7n,Uid:547ff756-0fe2-4d76-8a18-b23758fbd4a2,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e\"" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.314 [INFO][5758] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.320 [INFO][5758] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0 coredns-76f75df574- kube-system ec5b66ae-6390-4504-928f-0cd85519c73d 663 0 2025-01-13 21:39:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.0-a-ed112912ac coredns-76f75df574-svkr6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56879e4707b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.320 [INFO][5758] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 
21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.334 [INFO][5845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" HandleID="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Workload="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.339 [INFO][5845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" HandleID="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Workload="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000228b30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.0-a-ed112912ac", "pod":"coredns-76f75df574-svkr6", "timestamp":"2025-01-13 21:39:27.334789855 +0000 UTC"}, Hostname:"ci-4152.2.0-a-ed112912ac", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.339 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.401 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.401 [INFO][5845] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-ed112912ac' Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.403 [INFO][5845] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.406 [INFO][5845] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.409 [INFO][5845] ipam/ipam.go 489: Trying affinity for 192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.410 [INFO][5845] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.412 [INFO][5845] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.0/26 host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.412 [INFO][5845] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.0/26 handle="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.412 [INFO][5845] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917 Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.415 [INFO][5845] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.0/26 handle="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.419 [INFO][5845] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.17.6/26] block=192.168.17.0/26 handle="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.419 [INFO][5845] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.6/26] handle="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" host="ci-4152.2.0-a-ed112912ac" Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.419 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:39:27.426646 containerd[1791]: 2025-01-13 21:39:27.419 [INFO][5845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.6/26] IPv6=[] ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" HandleID="k8s-pod-network.236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Workload="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 21:39:27.427105 containerd[1791]: 2025-01-13 21:39:27.420 [INFO][5758] cni-plugin/k8s.go 386: Populated endpoint ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ec5b66ae-6390-4504-928f-0cd85519c73d", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"", Pod:"coredns-76f75df574-svkr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56879e4707b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.427105 containerd[1791]: 2025-01-13 21:39:27.420 [INFO][5758] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.6/32] ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 21:39:27.427105 containerd[1791]: 2025-01-13 21:39:27.420 [INFO][5758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56879e4707b ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 21:39:27.427105 containerd[1791]: 2025-01-13 21:39:27.421 [INFO][5758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 21:39:27.427105 containerd[1791]: 2025-01-13 21:39:27.421 [INFO][5758] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ec5b66ae-6390-4504-928f-0cd85519c73d", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-ed112912ac", ContainerID:"236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917", Pod:"coredns-76f75df574-svkr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56879e4707b", MAC:"da:03:78:fd:f7:e4", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:39:27.427105 containerd[1791]: 2025-01-13 21:39:27.425 [INFO][5758] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917" Namespace="kube-system" Pod="coredns-76f75df574-svkr6" WorkloadEndpoint="ci--4152.2.0--a--ed112912ac-k8s-coredns--76f75df574--svkr6-eth0" Jan 13 21:39:27.427792 systemd[1]: Started cri-containerd-e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472.scope - libcontainer container e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472. Jan 13 21:39:27.427938 containerd[1791]: time="2025-01-13T21:39:27.427916538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhvsz,Uid:d53850a4-8d71-4e78-b476-3e30d51488fd,Namespace:calico-system,Attempt:5,} returns sandbox id \"79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29\"" Jan 13 21:39:27.436250 containerd[1791]: time="2025-01-13T21:39:27.436180858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:27.436250 containerd[1791]: time="2025-01-13T21:39:27.436211245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:27.436250 containerd[1791]: time="2025-01-13T21:39:27.436218187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.436352 containerd[1791]: time="2025-01-13T21:39:27.436256691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:27.442208 systemd[1]: Started cri-containerd-236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917.scope - libcontainer container 236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917. Jan 13 21:39:27.450997 containerd[1791]: time="2025-01-13T21:39:27.450976572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqn4h,Uid:3bf9b435-fe1c-48eb-8f0f-0b2ec937fa28,Namespace:kube-system,Attempt:5,} returns sandbox id \"e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472\"" Jan 13 21:39:27.452202 containerd[1791]: time="2025-01-13T21:39:27.452188464Z" level=info msg="CreateContainer within sandbox \"e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:39:27.457055 containerd[1791]: time="2025-01-13T21:39:27.457018208Z" level=info msg="CreateContainer within sandbox \"e8bbbbb465db80d27e39ada65fb3e1c4f0166930406244b48288a9545c694472\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53568ace467ce25a41f87f521094fa8c1e05f97226a8c95a28cbf32a7bd7603f\"" Jan 13 21:39:27.457209 containerd[1791]: time="2025-01-13T21:39:27.457196223Z" level=info msg="StartContainer for \"53568ace467ce25a41f87f521094fa8c1e05f97226a8c95a28cbf32a7bd7603f\"" Jan 13 21:39:27.464666 containerd[1791]: time="2025-01-13T21:39:27.464643216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svkr6,Uid:ec5b66ae-6390-4504-928f-0cd85519c73d,Namespace:kube-system,Attempt:5,} returns sandbox id \"236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917\"" Jan 13 21:39:27.465939 containerd[1791]: time="2025-01-13T21:39:27.465927244Z" level=info 
msg="CreateContainer within sandbox \"236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:39:27.470543 containerd[1791]: time="2025-01-13T21:39:27.470498558Z" level=info msg="CreateContainer within sandbox \"236109e126021c160b23de0c4b8ca0e2e33975493a0c390951e79dd5549cb917\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1819a384f2d8eadb68d780d67fc813327ea134695af535e2f9a3db429dc85779\"" Jan 13 21:39:27.470692 containerd[1791]: time="2025-01-13T21:39:27.470645454Z" level=info msg="StartContainer for \"1819a384f2d8eadb68d780d67fc813327ea134695af535e2f9a3db429dc85779\"" Jan 13 21:39:27.477191 systemd[1]: Started cri-containerd-53568ace467ce25a41f87f521094fa8c1e05f97226a8c95a28cbf32a7bd7603f.scope - libcontainer container 53568ace467ce25a41f87f521094fa8c1e05f97226a8c95a28cbf32a7bd7603f. Jan 13 21:39:27.480472 systemd[1]: Started cri-containerd-1819a384f2d8eadb68d780d67fc813327ea134695af535e2f9a3db429dc85779.scope - libcontainer container 1819a384f2d8eadb68d780d67fc813327ea134695af535e2f9a3db429dc85779. 
Jan 13 21:39:27.507497 containerd[1791]: time="2025-01-13T21:39:27.507463468Z" level=info msg="StartContainer for \"53568ace467ce25a41f87f521094fa8c1e05f97226a8c95a28cbf32a7bd7603f\" returns successfully"
Jan 13 21:39:27.507592 containerd[1791]: time="2025-01-13T21:39:27.507579941Z" level=info msg="StartContainer for \"1819a384f2d8eadb68d780d67fc813327ea134695af535e2f9a3db429dc85779\" returns successfully"
Jan 13 21:39:27.780060 kernel: bpftool[6422]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 13 21:39:27.951432 systemd-networkd[1710]: vxlan.calico: Link UP
Jan 13 21:39:27.951437 systemd-networkd[1710]: vxlan.calico: Gained carrier
Jan 13 21:39:28.306399 kubelet[3268]: I0113 21:39:28.306353 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:39:28.317988 kubelet[3268]: I0113 21:39:28.317966 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dqn4h" podStartSLOduration=22.317934603 podStartE2EDuration="22.317934603s" podCreationTimestamp="2025-01-13 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:39:28.317675129 +0000 UTC m=+36.217374695" watchObservedRunningTime="2025-01-13 21:39:28.317934603 +0000 UTC m=+36.217634165"
Jan 13 21:39:28.327309 kubelet[3268]: I0113 21:39:28.327287 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-svkr6" podStartSLOduration=22.327260136 podStartE2EDuration="22.327260136s" podCreationTimestamp="2025-01-13 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:39:28.32713608 +0000 UTC m=+36.226835647" watchObservedRunningTime="2025-01-13 21:39:28.327260136 +0000 UTC m=+36.226959698"
Jan 13 21:39:28.545332 systemd-networkd[1710]: cali56879e4707b: Gained IPv6LL
Jan 13 21:39:28.546173 systemd-networkd[1710]: cali1b2390a153e: Gained IPv6LL
Jan 13 21:39:28.610087 systemd-networkd[1710]: cali4959ed4311b: Gained IPv6LL
Jan 13 21:39:28.738280 systemd-networkd[1710]: calia52e271b58b: Gained IPv6LL
Jan 13 21:39:28.801307 systemd-networkd[1710]: calic85bab3f1ba: Gained IPv6LL
Jan 13 21:39:28.929141 systemd-networkd[1710]: cali7a5d8679f5b: Gained IPv6LL
Jan 13 21:39:29.092903 containerd[1791]: time="2025-01-13T21:39:29.092849433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:29.093122 containerd[1791]: time="2025-01-13T21:39:29.092962306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Jan 13 21:39:29.093335 containerd[1791]: time="2025-01-13T21:39:29.093296266Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:29.094433 containerd[1791]: time="2025-01-13T21:39:29.094389369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:29.094834 containerd[1791]: time="2025-01-13T21:39:29.094793002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.688465133s"
Jan 13 21:39:29.094834 containerd[1791]: time="2025-01-13T21:39:29.094809195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 13 21:39:29.095125 containerd[1791]: time="2025-01-13T21:39:29.095083561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 13 21:39:29.095780 containerd[1791]: time="2025-01-13T21:39:29.095744186Z" level=info msg="CreateContainer within sandbox \"e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 21:39:29.100141 containerd[1791]: time="2025-01-13T21:39:29.100085228Z" level=info msg="CreateContainer within sandbox \"e0cf45d74925c197ee486a75db6051e1ce51fdd20573b804dabf23c3407b1c4d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0cd624145bc400596b453d6e9c99c95cf463dd28e06748d6861e9259cb15bc95\""
Jan 13 21:39:29.100362 containerd[1791]: time="2025-01-13T21:39:29.100312327Z" level=info msg="StartContainer for \"0cd624145bc400596b453d6e9c99c95cf463dd28e06748d6861e9259cb15bc95\""
Jan 13 21:39:29.126192 systemd[1]: Started cri-containerd-0cd624145bc400596b453d6e9c99c95cf463dd28e06748d6861e9259cb15bc95.scope - libcontainer container 0cd624145bc400596b453d6e9c99c95cf463dd28e06748d6861e9259cb15bc95.
Jan 13 21:39:29.151943 containerd[1791]: time="2025-01-13T21:39:29.151913370Z" level=info msg="StartContainer for \"0cd624145bc400596b453d6e9c99c95cf463dd28e06748d6861e9259cb15bc95\" returns successfully"
Jan 13 21:39:29.315467 kubelet[3268]: I0113 21:39:29.315434 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f4cfd48dc-rmn7p" podStartSLOduration=12.626319496 podStartE2EDuration="14.315390807s" podCreationTimestamp="2025-01-13 21:39:15 +0000 UTC" firstStartedPulling="2025-01-13 21:39:27.405924959 +0000 UTC m=+35.305624534" lastFinishedPulling="2025-01-13 21:39:29.094996271 +0000 UTC m=+36.994695845" observedRunningTime="2025-01-13 21:39:29.31538866 +0000 UTC m=+37.215088233" watchObservedRunningTime="2025-01-13 21:39:29.315390807 +0000 UTC m=+37.215090380"
Jan 13 21:39:29.441121 systemd-networkd[1710]: vxlan.calico: Gained IPv6LL
Jan 13 21:39:30.310821 kubelet[3268]: I0113 21:39:30.310798 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:39:30.748591 containerd[1791]: time="2025-01-13T21:39:30.748568178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:30.748813 containerd[1791]: time="2025-01-13T21:39:30.748794267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 13 21:39:30.749184 containerd[1791]: time="2025-01-13T21:39:30.749144578Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:30.750052 containerd[1791]: time="2025-01-13T21:39:30.750007987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:30.750467 containerd[1791]: time="2025-01-13T21:39:30.750424101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.655323197s"
Jan 13 21:39:30.750467 containerd[1791]: time="2025-01-13T21:39:30.750440757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 13 21:39:30.750769 containerd[1791]: time="2025-01-13T21:39:30.750723523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 21:39:30.753666 containerd[1791]: time="2025-01-13T21:39:30.753646814Z" level=info msg="CreateContainer within sandbox \"23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 21:39:30.758508 containerd[1791]: time="2025-01-13T21:39:30.758488143Z" level=info msg="CreateContainer within sandbox \"23531ca7b3187fd124a9e1be0d0af827197222a1e75e39a8b8c55cc5e80a2213\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8443f1f0c4e1b37d5cc95ac372af2ded485903e33721039a38ab95990a144d0\""
Jan 13 21:39:30.758715 containerd[1791]: time="2025-01-13T21:39:30.758690375Z" level=info msg="StartContainer for \"e8443f1f0c4e1b37d5cc95ac372af2ded485903e33721039a38ab95990a144d0\""
Jan 13 21:39:30.783193 systemd[1]: Started cri-containerd-e8443f1f0c4e1b37d5cc95ac372af2ded485903e33721039a38ab95990a144d0.scope - libcontainer container e8443f1f0c4e1b37d5cc95ac372af2ded485903e33721039a38ab95990a144d0.
Jan 13 21:39:30.806064 containerd[1791]: time="2025-01-13T21:39:30.806042412Z" level=info msg="StartContainer for \"e8443f1f0c4e1b37d5cc95ac372af2ded485903e33721039a38ab95990a144d0\" returns successfully"
Jan 13 21:39:31.127190 containerd[1791]: time="2025-01-13T21:39:31.127083206Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:31.127287 containerd[1791]: time="2025-01-13T21:39:31.127239458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 13 21:39:31.129014 containerd[1791]: time="2025-01-13T21:39:31.128973834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 378.233752ms"
Jan 13 21:39:31.129014 containerd[1791]: time="2025-01-13T21:39:31.129012866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 13 21:39:31.129426 containerd[1791]: time="2025-01-13T21:39:31.129374343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 13 21:39:31.130341 containerd[1791]: time="2025-01-13T21:39:31.130268125Z" level=info msg="CreateContainer within sandbox \"46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 21:39:31.134921 containerd[1791]: time="2025-01-13T21:39:31.134906374Z" level=info msg="CreateContainer within sandbox \"46c3f02c2b26e0cb18546cddd70f43984252ef2d6a763bcca9949182aad8147e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"857656d3eb3dcffccb8ef1a9706672105eac30dceb1eb242bcff0ce8407e1f39\""
Jan 13 21:39:31.135214 containerd[1791]: time="2025-01-13T21:39:31.135201100Z" level=info msg="StartContainer for \"857656d3eb3dcffccb8ef1a9706672105eac30dceb1eb242bcff0ce8407e1f39\""
Jan 13 21:39:31.164257 systemd[1]: Started cri-containerd-857656d3eb3dcffccb8ef1a9706672105eac30dceb1eb242bcff0ce8407e1f39.scope - libcontainer container 857656d3eb3dcffccb8ef1a9706672105eac30dceb1eb242bcff0ce8407e1f39.
Jan 13 21:39:31.198052 containerd[1791]: time="2025-01-13T21:39:31.198019702Z" level=info msg="StartContainer for \"857656d3eb3dcffccb8ef1a9706672105eac30dceb1eb242bcff0ce8407e1f39\" returns successfully"
Jan 13 21:39:31.331853 kubelet[3268]: I0113 21:39:31.331831 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f4cfd48dc-t2q7n" podStartSLOduration=12.62479831 podStartE2EDuration="16.331806018s" podCreationTimestamp="2025-01-13 21:39:15 +0000 UTC" firstStartedPulling="2025-01-13 21:39:27.422220988 +0000 UTC m=+35.321920553" lastFinishedPulling="2025-01-13 21:39:31.12922869 +0000 UTC m=+39.028928261" observedRunningTime="2025-01-13 21:39:31.331523208 +0000 UTC m=+39.231222774" watchObservedRunningTime="2025-01-13 21:39:31.331806018 +0000 UTC m=+39.231505580"
Jan 13 21:39:31.336590 kubelet[3268]: I0113 21:39:31.336570 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b7795d5b4-82g4w" podStartSLOduration=12.993678099 podStartE2EDuration="16.336541198s" podCreationTimestamp="2025-01-13 21:39:15 +0000 UTC" firstStartedPulling="2025-01-13 21:39:27.40775908 +0000 UTC m=+35.307458650" lastFinishedPulling="2025-01-13 21:39:30.750622184 +0000 UTC m=+38.650321749" observedRunningTime="2025-01-13 21:39:31.336293948 +0000 UTC m=+39.235993516" watchObservedRunningTime="2025-01-13 21:39:31.336541198 +0000 UTC m=+39.236240760"
Jan 13 21:39:32.318461 kubelet[3268]: I0113 21:39:32.318388 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:39:32.476139 containerd[1791]: time="2025-01-13T21:39:32.476111439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:32.476412 containerd[1791]: time="2025-01-13T21:39:32.476304815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 13 21:39:32.476617 containerd[1791]: time="2025-01-13T21:39:32.476603652Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:32.477981 containerd[1791]: time="2025-01-13T21:39:32.477967024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:32.478265 containerd[1791]: time="2025-01-13T21:39:32.478223991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.348817219s"
Jan 13 21:39:32.478265 containerd[1791]: time="2025-01-13T21:39:32.478239602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 13 21:39:32.479208 containerd[1791]: time="2025-01-13T21:39:32.479196455Z" level=info msg="CreateContainer within sandbox \"79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 13 21:39:32.484765 containerd[1791]: time="2025-01-13T21:39:32.484722799Z" level=info msg="CreateContainer within sandbox \"79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0e866d6e410030169320583e06b69f846fb91932fbaa2909eeb9183f7cf12f1a\""
Jan 13 21:39:32.484987 containerd[1791]: time="2025-01-13T21:39:32.484973815Z" level=info msg="StartContainer for \"0e866d6e410030169320583e06b69f846fb91932fbaa2909eeb9183f7cf12f1a\""
Jan 13 21:39:32.510493 systemd[1]: Started cri-containerd-0e866d6e410030169320583e06b69f846fb91932fbaa2909eeb9183f7cf12f1a.scope - libcontainer container 0e866d6e410030169320583e06b69f846fb91932fbaa2909eeb9183f7cf12f1a.
Jan 13 21:39:32.571728 containerd[1791]: time="2025-01-13T21:39:32.571611503Z" level=info msg="StartContainer for \"0e866d6e410030169320583e06b69f846fb91932fbaa2909eeb9183f7cf12f1a\" returns successfully"
Jan 13 21:39:32.572635 containerd[1791]: time="2025-01-13T21:39:32.572609911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 21:39:33.819460 containerd[1791]: time="2025-01-13T21:39:33.819405073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:33.819685 containerd[1791]: time="2025-01-13T21:39:33.819627718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 13 21:39:33.819973 containerd[1791]: time="2025-01-13T21:39:33.819932210Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:33.820904 containerd[1791]: time="2025-01-13T21:39:33.820861285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:39:33.821321 containerd[1791]: time="2025-01-13T21:39:33.821299041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.248662419s"
Jan 13 21:39:33.821321 containerd[1791]: time="2025-01-13T21:39:33.821316540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 13 21:39:33.822315 containerd[1791]: time="2025-01-13T21:39:33.822305075Z" level=info msg="CreateContainer within sandbox \"79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 21:39:33.827264 containerd[1791]: time="2025-01-13T21:39:33.827221320Z" level=info msg="CreateContainer within sandbox \"79b718fc6c75cd26f14a6f0a3a8b28ee303416233d438bd4a54a70b5c51d3b29\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"61eb76e78b4ddc9ec22a4ba57ee9178c3285755a76e6b8a27f528f664bfa66cf\""
Jan 13 21:39:33.827540 containerd[1791]: time="2025-01-13T21:39:33.827481227Z" level=info msg="StartContainer for \"61eb76e78b4ddc9ec22a4ba57ee9178c3285755a76e6b8a27f528f664bfa66cf\""
Jan 13 21:39:33.852320 systemd[1]: Started cri-containerd-61eb76e78b4ddc9ec22a4ba57ee9178c3285755a76e6b8a27f528f664bfa66cf.scope - libcontainer container 61eb76e78b4ddc9ec22a4ba57ee9178c3285755a76e6b8a27f528f664bfa66cf.
Jan 13 21:39:33.867028 containerd[1791]: time="2025-01-13T21:39:33.866997415Z" level=info msg="StartContainer for \"61eb76e78b4ddc9ec22a4ba57ee9178c3285755a76e6b8a27f528f664bfa66cf\" returns successfully"
Jan 13 21:39:34.197787 kubelet[3268]: I0113 21:39:34.197674 3268 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 21:39:34.197787 kubelet[3268]: I0113 21:39:34.197766 3268 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 21:39:34.362650 kubelet[3268]: I0113 21:39:34.362540 3268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-nhvsz" podStartSLOduration=12.969358427 podStartE2EDuration="19.362437937s" podCreationTimestamp="2025-01-13 21:39:15 +0000 UTC" firstStartedPulling="2025-01-13 21:39:27.428414995 +0000 UTC m=+35.328114560" lastFinishedPulling="2025-01-13 21:39:33.821494505 +0000 UTC m=+41.721194070" observedRunningTime="2025-01-13 21:39:34.361182424 +0000 UTC m=+42.260882066" watchObservedRunningTime="2025-01-13 21:39:34.362437937 +0000 UTC m=+42.262137555"
Jan 13 21:39:39.300600 kubelet[3268]: I0113 21:39:39.300519 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:39:52.147836 containerd[1791]: time="2025-01-13T21:39:52.147767903Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\""
Jan 13 21:39:52.148201 containerd[1791]: time="2025-01-13T21:39:52.147838593Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully"
Jan 13 21:39:52.148201 containerd[1791]: time="2025-01-13T21:39:52.147864704Z" level=info msg="StopPodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully"
Jan 13 21:39:52.148201 containerd[1791]: time="2025-01-13T21:39:52.148063167Z" level=info msg="RemovePodSandbox for \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\""
Jan 13 21:39:52.148201 containerd[1791]: time="2025-01-13T21:39:52.148077411Z" level=info msg="Forcibly stopping sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\""
Jan 13 21:39:52.148201 containerd[1791]: time="2025-01-13T21:39:52.148112673Z" level=info msg="TearDown network for sandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" successfully"
Jan 13 21:39:52.149450 containerd[1791]: time="2025-01-13T21:39:52.149406996Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.149450 containerd[1791]: time="2025-01-13T21:39:52.149427471Z" level=info msg="RemovePodSandbox \"41cbdab38d19a5607c0a91ad1e499acba15024c3c83bd40a766a2fe0db0aba2f\" returns successfully"
Jan 13 21:39:52.149752 containerd[1791]: time="2025-01-13T21:39:52.149705269Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\""
Jan 13 21:39:52.149791 containerd[1791]: time="2025-01-13T21:39:52.149768330Z" level=info msg="TearDown network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" successfully"
Jan 13 21:39:52.149791 containerd[1791]: time="2025-01-13T21:39:52.149775675Z" level=info msg="StopPodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" returns successfully"
Jan 13 21:39:52.149924 containerd[1791]: time="2025-01-13T21:39:52.149914983Z" level=info msg="RemovePodSandbox for \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\""
Jan 13 21:39:52.149949 containerd[1791]: time="2025-01-13T21:39:52.149927375Z" level=info msg="Forcibly stopping sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\""
Jan 13 21:39:52.149976 containerd[1791]: time="2025-01-13T21:39:52.149960198Z" level=info msg="TearDown network for sandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" successfully"
Jan 13 21:39:52.151268 containerd[1791]: time="2025-01-13T21:39:52.151236803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.151349 containerd[1791]: time="2025-01-13T21:39:52.151270796Z" level=info msg="RemovePodSandbox \"ceb40859e3c756951ea2f1eaa0b974e773eba1205f01058d2c9c1149ca4299a4\" returns successfully"
Jan 13 21:39:52.151531 containerd[1791]: time="2025-01-13T21:39:52.151486595Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\""
Jan 13 21:39:52.151611 containerd[1791]: time="2025-01-13T21:39:52.151570726Z" level=info msg="TearDown network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" successfully"
Jan 13 21:39:52.151611 containerd[1791]: time="2025-01-13T21:39:52.151581628Z" level=info msg="StopPodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" returns successfully"
Jan 13 21:39:52.151823 containerd[1791]: time="2025-01-13T21:39:52.151789391Z" level=info msg="RemovePodSandbox for \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\""
Jan 13 21:39:52.151823 containerd[1791]: time="2025-01-13T21:39:52.151803127Z" level=info msg="Forcibly stopping sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\""
Jan 13 21:39:52.151862 containerd[1791]: time="2025-01-13T21:39:52.151835817Z" level=info msg="TearDown network for sandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" successfully"
Jan 13 21:39:52.153363 containerd[1791]: time="2025-01-13T21:39:52.153327855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.153398 containerd[1791]: time="2025-01-13T21:39:52.153363680Z" level=info msg="RemovePodSandbox \"db77217095c9ecb6371672f242cc09a341f54df202c09532e4f112f63d3a1510\" returns successfully"
Jan 13 21:39:52.153530 containerd[1791]: time="2025-01-13T21:39:52.153520436Z" level=info msg="StopPodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\""
Jan 13 21:39:52.153577 containerd[1791]: time="2025-01-13T21:39:52.153569005Z" level=info msg="TearDown network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" successfully"
Jan 13 21:39:52.153600 containerd[1791]: time="2025-01-13T21:39:52.153577120Z" level=info msg="StopPodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" returns successfully"
Jan 13 21:39:52.153756 containerd[1791]: time="2025-01-13T21:39:52.153722643Z" level=info msg="RemovePodSandbox for \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\""
Jan 13 21:39:52.153756 containerd[1791]: time="2025-01-13T21:39:52.153733949Z" level=info msg="Forcibly stopping sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\""
Jan 13 21:39:52.153798 containerd[1791]: time="2025-01-13T21:39:52.153764221Z" level=info msg="TearDown network for sandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" successfully"
Jan 13 21:39:52.154930 containerd[1791]: time="2025-01-13T21:39:52.154890526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.154930 containerd[1791]: time="2025-01-13T21:39:52.154928320Z" level=info msg="RemovePodSandbox \"e63ad4a51f5393e758d19c60437c82464ef83518adcd9fc32fcb945c20848bc0\" returns successfully"
Jan 13 21:39:52.155144 containerd[1791]: time="2025-01-13T21:39:52.155090872Z" level=info msg="StopPodSandbox for \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\""
Jan 13 21:39:52.155211 containerd[1791]: time="2025-01-13T21:39:52.155176207Z" level=info msg="TearDown network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\" successfully"
Jan 13 21:39:52.155211 containerd[1791]: time="2025-01-13T21:39:52.155182429Z" level=info msg="StopPodSandbox for \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\" returns successfully"
Jan 13 21:39:52.155355 containerd[1791]: time="2025-01-13T21:39:52.155326758Z" level=info msg="RemovePodSandbox for \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\""
Jan 13 21:39:52.155355 containerd[1791]: time="2025-01-13T21:39:52.155339034Z" level=info msg="Forcibly stopping sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\""
Jan 13 21:39:52.155439 containerd[1791]: time="2025-01-13T21:39:52.155392041Z" level=info msg="TearDown network for sandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\" successfully"
Jan 13 21:39:52.156581 containerd[1791]: time="2025-01-13T21:39:52.156542369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.156581 containerd[1791]: time="2025-01-13T21:39:52.156562373Z" level=info msg="RemovePodSandbox \"3dc84a4b062fab2be341389ccabbc19c11a1dc41820a3b9d9a7bec8217ec143c\" returns successfully"
Jan 13 21:39:52.156745 containerd[1791]: time="2025-01-13T21:39:52.156735793Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\""
Jan 13 21:39:52.156778 containerd[1791]: time="2025-01-13T21:39:52.156772543Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully"
Jan 13 21:39:52.156798 containerd[1791]: time="2025-01-13T21:39:52.156778550Z" level=info msg="StopPodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully"
Jan 13 21:39:52.156893 containerd[1791]: time="2025-01-13T21:39:52.156882982Z" level=info msg="RemovePodSandbox for \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\""
Jan 13 21:39:52.156915 containerd[1791]: time="2025-01-13T21:39:52.156896533Z" level=info msg="Forcibly stopping sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\""
Jan 13 21:39:52.156945 containerd[1791]: time="2025-01-13T21:39:52.156929765Z" level=info msg="TearDown network for sandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" successfully"
Jan 13 21:39:52.158140 containerd[1791]: time="2025-01-13T21:39:52.158106205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.158175 containerd[1791]: time="2025-01-13T21:39:52.158141415Z" level=info msg="RemovePodSandbox \"bd33c3bf4c42717273a6595c1a2af17c6b4773d32b5ca1f73f0b641a58f7ca0d\" returns successfully"
Jan 13 21:39:52.158358 containerd[1791]: time="2025-01-13T21:39:52.158320893Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\""
Jan 13 21:39:52.158405 containerd[1791]: time="2025-01-13T21:39:52.158394274Z" level=info msg="TearDown network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" successfully"
Jan 13 21:39:52.158405 containerd[1791]: time="2025-01-13T21:39:52.158400412Z" level=info msg="StopPodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" returns successfully"
Jan 13 21:39:52.158581 containerd[1791]: time="2025-01-13T21:39:52.158544118Z" level=info msg="RemovePodSandbox for \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\""
Jan 13 21:39:52.158581 containerd[1791]: time="2025-01-13T21:39:52.158553925Z" level=info msg="Forcibly stopping sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\""
Jan 13 21:39:52.158622 containerd[1791]: time="2025-01-13T21:39:52.158608409Z" level=info msg="TearDown network for sandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" successfully"
Jan 13 21:39:52.159742 containerd[1791]: time="2025-01-13T21:39:52.159702997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.159742 containerd[1791]: time="2025-01-13T21:39:52.159719816Z" level=info msg="RemovePodSandbox \"44f601fefee16962a59aaf4a92d46084a05807a929d2b065f02367de12d6913a\" returns successfully"
Jan 13 21:39:52.159857 containerd[1791]: time="2025-01-13T21:39:52.159847356Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\""
Jan 13 21:39:52.159897 containerd[1791]: time="2025-01-13T21:39:52.159890417Z" level=info msg="TearDown network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" successfully"
Jan 13 21:39:52.159921 containerd[1791]: time="2025-01-13T21:39:52.159897196Z" level=info msg="StopPodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" returns successfully"
Jan 13 21:39:52.159996 containerd[1791]: time="2025-01-13T21:39:52.159988044Z" level=info msg="RemovePodSandbox for \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\""
Jan 13 21:39:52.160020 containerd[1791]: time="2025-01-13T21:39:52.159998830Z" level=info msg="Forcibly stopping sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\""
Jan 13 21:39:52.160082 containerd[1791]: time="2025-01-13T21:39:52.160050698Z" level=info msg="TearDown network for sandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" successfully"
Jan 13 21:39:52.161263 containerd[1791]: time="2025-01-13T21:39:52.161195782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.161263 containerd[1791]: time="2025-01-13T21:39:52.161230876Z" level=info msg="RemovePodSandbox \"911c23c1ca07ba7b6e892bd945d49fad59d2eceeb01fd29a499a18cf2357cc2c\" returns successfully"
Jan 13 21:39:52.161452 containerd[1791]: time="2025-01-13T21:39:52.161378939Z" level=info msg="StopPodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\""
Jan 13 21:39:52.161502 containerd[1791]: time="2025-01-13T21:39:52.161468555Z" level=info msg="TearDown network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" successfully"
Jan 13 21:39:52.161502 containerd[1791]: time="2025-01-13T21:39:52.161491203Z" level=info msg="StopPodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" returns successfully"
Jan 13 21:39:52.161654 containerd[1791]: time="2025-01-13T21:39:52.161624926Z" level=info msg="RemovePodSandbox for \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\""
Jan 13 21:39:52.161677 containerd[1791]: time="2025-01-13T21:39:52.161652597Z" level=info msg="Forcibly stopping sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\""
Jan 13 21:39:52.161705 containerd[1791]: time="2025-01-13T21:39:52.161689237Z" level=info msg="TearDown network for sandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" successfully"
Jan 13 21:39:52.162776 containerd[1791]: time="2025-01-13T21:39:52.162741739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.162815 containerd[1791]: time="2025-01-13T21:39:52.162778299Z" level=info msg="RemovePodSandbox \"4cd3625ee4188ef2fda9e7b101ff7efe1ac42de742a4b8009b911b700a3a016b\" returns successfully"
Jan 13 21:39:52.162890 containerd[1791]: time="2025-01-13T21:39:52.162881212Z" level=info msg="StopPodSandbox for \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\""
Jan 13 21:39:52.162930 containerd[1791]: time="2025-01-13T21:39:52.162922392Z" level=info msg="TearDown network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\" successfully"
Jan 13 21:39:52.162947 containerd[1791]: time="2025-01-13T21:39:52.162929997Z" level=info msg="StopPodSandbox for \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\" returns successfully"
Jan 13 21:39:52.163104 containerd[1791]: time="2025-01-13T21:39:52.163061604Z" level=info msg="RemovePodSandbox for \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\""
Jan 13 21:39:52.163104 containerd[1791]: time="2025-01-13T21:39:52.163073436Z" level=info msg="Forcibly stopping sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\""
Jan 13 21:39:52.163157 containerd[1791]: time="2025-01-13T21:39:52.163129478Z" level=info msg="TearDown network for sandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\" successfully"
Jan 13 21:39:52.164254 containerd[1791]: time="2025-01-13T21:39:52.164220565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.164307 containerd[1791]: time="2025-01-13T21:39:52.164253829Z" level=info msg="RemovePodSandbox \"88e205fa95fd77da16da1565092af3731c1734612712a99570247c6d2fa71137\" returns successfully"
Jan 13 21:39:52.164422 containerd[1791]: time="2025-01-13T21:39:52.164409557Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\""
Jan 13 21:39:52.164483 containerd[1791]: time="2025-01-13T21:39:52.164462999Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully"
Jan 13 21:39:52.164516 containerd[1791]: time="2025-01-13T21:39:52.164483201Z" level=info msg="StopPodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully"
Jan 13 21:39:52.164692 containerd[1791]: time="2025-01-13T21:39:52.164680128Z" level=info msg="RemovePodSandbox for \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\""
Jan 13 21:39:52.164710 containerd[1791]: time="2025-01-13T21:39:52.164694745Z" level=info msg="Forcibly stopping sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\""
Jan 13 21:39:52.164741 containerd[1791]: time="2025-01-13T21:39:52.164726471Z" level=info msg="TearDown network for sandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" successfully"
Jan 13 21:39:52.165803 containerd[1791]: time="2025-01-13T21:39:52.165764525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.165803 containerd[1791]: time="2025-01-13T21:39:52.165780354Z" level=info msg="RemovePodSandbox \"2edd6be61d94da0ca610d6edf70d72b828559c74b24cfe33faa5a63085332b62\" returns successfully"
Jan 13 21:39:52.165932 containerd[1791]: time="2025-01-13T21:39:52.165922855Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\""
Jan 13 21:39:52.165976 containerd[1791]: time="2025-01-13T21:39:52.165969841Z" level=info msg="TearDown network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" successfully"
Jan 13 21:39:52.165997 containerd[1791]: time="2025-01-13T21:39:52.165976694Z" level=info msg="StopPodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" returns successfully"
Jan 13 21:39:52.166170 containerd[1791]: time="2025-01-13T21:39:52.166133040Z" level=info msg="RemovePodSandbox for \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\""
Jan 13 21:39:52.166170 containerd[1791]: time="2025-01-13T21:39:52.166143394Z" level=info msg="Forcibly stopping sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\""
Jan 13 21:39:52.166254 containerd[1791]: time="2025-01-13T21:39:52.166203071Z" level=info msg="TearDown network for sandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" successfully"
Jan 13 21:39:52.167352 containerd[1791]: time="2025-01-13T21:39:52.167302676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.167352 containerd[1791]: time="2025-01-13T21:39:52.167346365Z" level=info msg="RemovePodSandbox \"cc71bd05e173b6f262d81f866b9cfa064d2bdbc7b4b64b6a775e627bf3c80a77\" returns successfully" Jan 13 21:39:52.167556 containerd[1791]: time="2025-01-13T21:39:52.167546703Z" level=info msg="StopPodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" Jan 13 21:39:52.167692 containerd[1791]: time="2025-01-13T21:39:52.167621230Z" level=info msg="TearDown network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" successfully" Jan 13 21:39:52.167692 containerd[1791]: time="2025-01-13T21:39:52.167685919Z" level=info msg="StopPodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" returns successfully" Jan 13 21:39:52.167869 containerd[1791]: time="2025-01-13T21:39:52.167834988Z" level=info msg="RemovePodSandbox for \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" Jan 13 21:39:52.167869 containerd[1791]: time="2025-01-13T21:39:52.167862710Z" level=info msg="Forcibly stopping sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\"" Jan 13 21:39:52.167919 containerd[1791]: time="2025-01-13T21:39:52.167904657Z" level=info msg="TearDown network for sandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" successfully" Jan 13 21:39:52.169043 containerd[1791]: time="2025-01-13T21:39:52.168986878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.169105 containerd[1791]: time="2025-01-13T21:39:52.169042079Z" level=info msg="RemovePodSandbox \"f0ec76426419d9975e01995bf9bef4bd34c6825b7fa49e9d22809931691ed8a9\" returns successfully" Jan 13 21:39:52.169394 containerd[1791]: time="2025-01-13T21:39:52.169340145Z" level=info msg="StopPodSandbox for \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\"" Jan 13 21:39:52.169394 containerd[1791]: time="2025-01-13T21:39:52.169390688Z" level=info msg="TearDown network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" successfully" Jan 13 21:39:52.169507 containerd[1791]: time="2025-01-13T21:39:52.169415195Z" level=info msg="StopPodSandbox for \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" returns successfully" Jan 13 21:39:52.169685 containerd[1791]: time="2025-01-13T21:39:52.169648365Z" level=info msg="RemovePodSandbox for \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\"" Jan 13 21:39:52.169685 containerd[1791]: time="2025-01-13T21:39:52.169681184Z" level=info msg="Forcibly stopping sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\"" Jan 13 21:39:52.169754 containerd[1791]: time="2025-01-13T21:39:52.169732148Z" level=info msg="TearDown network for sandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" successfully" Jan 13 21:39:52.171000 containerd[1791]: time="2025-01-13T21:39:52.170986654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.171086 containerd[1791]: time="2025-01-13T21:39:52.171009393Z" level=info msg="RemovePodSandbox \"a9d0fd93dbd5971914a2ab7bd28c95ef2beae5a5282c58ab613f0b5716c9a817\" returns successfully" Jan 13 21:39:52.171342 containerd[1791]: time="2025-01-13T21:39:52.171259716Z" level=info msg="StopPodSandbox for \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\"" Jan 13 21:39:52.171431 containerd[1791]: time="2025-01-13T21:39:52.171374615Z" level=info msg="TearDown network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\" successfully" Jan 13 21:39:52.171431 containerd[1791]: time="2025-01-13T21:39:52.171395893Z" level=info msg="StopPodSandbox for \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\" returns successfully" Jan 13 21:39:52.171653 containerd[1791]: time="2025-01-13T21:39:52.171626089Z" level=info msg="RemovePodSandbox for \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\"" Jan 13 21:39:52.171653 containerd[1791]: time="2025-01-13T21:39:52.171652850Z" level=info msg="Forcibly stopping sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\"" Jan 13 21:39:52.171787 containerd[1791]: time="2025-01-13T21:39:52.171720863Z" level=info msg="TearDown network for sandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\" successfully" Jan 13 21:39:52.173110 containerd[1791]: time="2025-01-13T21:39:52.173048611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.173110 containerd[1791]: time="2025-01-13T21:39:52.173096537Z" level=info msg="RemovePodSandbox \"ae00a25b39547808fd2072ca2eaf3a2243f0d7509cc5fddce5d7ec2b1b87bdb0\" returns successfully" Jan 13 21:39:52.173337 containerd[1791]: time="2025-01-13T21:39:52.173299242Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:52.173381 containerd[1791]: time="2025-01-13T21:39:52.173337394Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully" Jan 13 21:39:52.173381 containerd[1791]: time="2025-01-13T21:39:52.173343584Z" level=info msg="StopPodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully" Jan 13 21:39:52.173498 containerd[1791]: time="2025-01-13T21:39:52.173471180Z" level=info msg="RemovePodSandbox for \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:52.173498 containerd[1791]: time="2025-01-13T21:39:52.173496053Z" level=info msg="Forcibly stopping sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\"" Jan 13 21:39:52.173554 containerd[1791]: time="2025-01-13T21:39:52.173537389Z" level=info msg="TearDown network for sandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" successfully" Jan 13 21:39:52.174689 containerd[1791]: time="2025-01-13T21:39:52.174649775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.174689 containerd[1791]: time="2025-01-13T21:39:52.174666338Z" level=info msg="RemovePodSandbox \"13815a139191e5adc9f3d82c3ae189cae5db2428baea3f42a8162aa56372d808\" returns successfully" Jan 13 21:39:52.174826 containerd[1791]: time="2025-01-13T21:39:52.174792343Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:52.174859 containerd[1791]: time="2025-01-13T21:39:52.174847426Z" level=info msg="TearDown network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" successfully" Jan 13 21:39:52.174859 containerd[1791]: time="2025-01-13T21:39:52.174853844Z" level=info msg="StopPodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" returns successfully" Jan 13 21:39:52.174963 containerd[1791]: time="2025-01-13T21:39:52.174952314Z" level=info msg="RemovePodSandbox for \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:52.174990 containerd[1791]: time="2025-01-13T21:39:52.174965669Z" level=info msg="Forcibly stopping sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\"" Jan 13 21:39:52.175018 containerd[1791]: time="2025-01-13T21:39:52.174997300Z" level=info msg="TearDown network for sandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" successfully" Jan 13 21:39:52.176168 containerd[1791]: time="2025-01-13T21:39:52.176120663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.176168 containerd[1791]: time="2025-01-13T21:39:52.176162033Z" level=info msg="RemovePodSandbox \"51a3d55c1185a4222591bbf8063144ab663550b1079647581abfdc79b085cb59\" returns successfully" Jan 13 21:39:52.176375 containerd[1791]: time="2025-01-13T21:39:52.176334895Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" Jan 13 21:39:52.176423 containerd[1791]: time="2025-01-13T21:39:52.176375735Z" level=info msg="TearDown network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" successfully" Jan 13 21:39:52.176423 containerd[1791]: time="2025-01-13T21:39:52.176382194Z" level=info msg="StopPodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" returns successfully" Jan 13 21:39:52.176553 containerd[1791]: time="2025-01-13T21:39:52.176519957Z" level=info msg="RemovePodSandbox for \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" Jan 13 21:39:52.176553 containerd[1791]: time="2025-01-13T21:39:52.176548339Z" level=info msg="Forcibly stopping sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\"" Jan 13 21:39:52.176603 containerd[1791]: time="2025-01-13T21:39:52.176581605Z" level=info msg="TearDown network for sandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" successfully" Jan 13 21:39:52.177759 containerd[1791]: time="2025-01-13T21:39:52.177714059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.177759 containerd[1791]: time="2025-01-13T21:39:52.177757152Z" level=info msg="RemovePodSandbox \"741f2e5c9cea3389571f0064e43cf791cbbd26633eeaa83fe3824c6de582c7a1\" returns successfully" Jan 13 21:39:52.177911 containerd[1791]: time="2025-01-13T21:39:52.177876905Z" level=info msg="StopPodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\"" Jan 13 21:39:52.177947 containerd[1791]: time="2025-01-13T21:39:52.177932656Z" level=info msg="TearDown network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" successfully" Jan 13 21:39:52.177947 containerd[1791]: time="2025-01-13T21:39:52.177939254Z" level=info msg="StopPodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" returns successfully" Jan 13 21:39:52.178152 containerd[1791]: time="2025-01-13T21:39:52.178091366Z" level=info msg="RemovePodSandbox for \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\"" Jan 13 21:39:52.178152 containerd[1791]: time="2025-01-13T21:39:52.178115384Z" level=info msg="Forcibly stopping sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\"" Jan 13 21:39:52.178248 containerd[1791]: time="2025-01-13T21:39:52.178186187Z" level=info msg="TearDown network for sandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" successfully" Jan 13 21:39:52.179315 containerd[1791]: time="2025-01-13T21:39:52.179267673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.179315 containerd[1791]: time="2025-01-13T21:39:52.179284773Z" level=info msg="RemovePodSandbox \"30f58528c110433dcf8157d10008d10a09045878ae8d1f90d090a510796d3cdf\" returns successfully" Jan 13 21:39:52.179511 containerd[1791]: time="2025-01-13T21:39:52.179454193Z" level=info msg="StopPodSandbox for \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\"" Jan 13 21:39:52.179511 containerd[1791]: time="2025-01-13T21:39:52.179508322Z" level=info msg="TearDown network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\" successfully" Jan 13 21:39:52.179609 containerd[1791]: time="2025-01-13T21:39:52.179530950Z" level=info msg="StopPodSandbox for \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\" returns successfully" Jan 13 21:39:52.179716 containerd[1791]: time="2025-01-13T21:39:52.179681176Z" level=info msg="RemovePodSandbox for \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\"" Jan 13 21:39:52.179716 containerd[1791]: time="2025-01-13T21:39:52.179691686Z" level=info msg="Forcibly stopping sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\"" Jan 13 21:39:52.179772 containerd[1791]: time="2025-01-13T21:39:52.179736907Z" level=info msg="TearDown network for sandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\" successfully" Jan 13 21:39:52.180870 containerd[1791]: time="2025-01-13T21:39:52.180830618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.180870 containerd[1791]: time="2025-01-13T21:39:52.180851181Z" level=info msg="RemovePodSandbox \"e184a9f776089c1c59cf73eea2d51ef2f1fe1c0ca482ba8fa08410dc9eb57e0e\" returns successfully" Jan 13 21:39:52.180979 containerd[1791]: time="2025-01-13T21:39:52.180969290Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:52.181091 containerd[1791]: time="2025-01-13T21:39:52.181021900Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully" Jan 13 21:39:52.181091 containerd[1791]: time="2025-01-13T21:39:52.181029642Z" level=info msg="StopPodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully" Jan 13 21:39:52.181269 containerd[1791]: time="2025-01-13T21:39:52.181215961Z" level=info msg="RemovePodSandbox for \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:52.181269 containerd[1791]: time="2025-01-13T21:39:52.181241129Z" level=info msg="Forcibly stopping sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\"" Jan 13 21:39:52.181336 containerd[1791]: time="2025-01-13T21:39:52.181303531Z" level=info msg="TearDown network for sandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" successfully" Jan 13 21:39:52.182542 containerd[1791]: time="2025-01-13T21:39:52.182498918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.182542 containerd[1791]: time="2025-01-13T21:39:52.182515622Z" level=info msg="RemovePodSandbox \"009387a7795657200ce8b2497ceaaf6c86a7e73daa5f7dcf5e9b1f81fb7e4555\" returns successfully" Jan 13 21:39:52.182805 containerd[1791]: time="2025-01-13T21:39:52.182748241Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:52.182857 containerd[1791]: time="2025-01-13T21:39:52.182828464Z" level=info msg="TearDown network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" successfully" Jan 13 21:39:52.182857 containerd[1791]: time="2025-01-13T21:39:52.182836544Z" level=info msg="StopPodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" returns successfully" Jan 13 21:39:52.182988 containerd[1791]: time="2025-01-13T21:39:52.182979567Z" level=info msg="RemovePodSandbox for \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:52.183015 containerd[1791]: time="2025-01-13T21:39:52.182994606Z" level=info msg="Forcibly stopping sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\"" Jan 13 21:39:52.183116 containerd[1791]: time="2025-01-13T21:39:52.183028615Z" level=info msg="TearDown network for sandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" successfully" Jan 13 21:39:52.184203 containerd[1791]: time="2025-01-13T21:39:52.184164281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.184203 containerd[1791]: time="2025-01-13T21:39:52.184183697Z" level=info msg="RemovePodSandbox \"31459d112e61b099febfa35e86f47dd36a74e8cd97338815b30be610292807ce\" returns successfully" Jan 13 21:39:52.184383 containerd[1791]: time="2025-01-13T21:39:52.184348336Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" Jan 13 21:39:52.184433 containerd[1791]: time="2025-01-13T21:39:52.184423895Z" level=info msg="TearDown network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" successfully" Jan 13 21:39:52.184453 containerd[1791]: time="2025-01-13T21:39:52.184431887Z" level=info msg="StopPodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" returns successfully" Jan 13 21:39:52.184690 containerd[1791]: time="2025-01-13T21:39:52.184638194Z" level=info msg="RemovePodSandbox for \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" Jan 13 21:39:52.184690 containerd[1791]: time="2025-01-13T21:39:52.184663463Z" level=info msg="Forcibly stopping sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\"" Jan 13 21:39:52.184776 containerd[1791]: time="2025-01-13T21:39:52.184693103Z" level=info msg="TearDown network for sandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" successfully" Jan 13 21:39:52.185866 containerd[1791]: time="2025-01-13T21:39:52.185818191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.185866 containerd[1791]: time="2025-01-13T21:39:52.185835002Z" level=info msg="RemovePodSandbox \"ffdc8f6f610466cfe603ced5b3db58c3a38bd5b12e1a8f6e715b5cac381a870c\" returns successfully" Jan 13 21:39:52.186000 containerd[1791]: time="2025-01-13T21:39:52.185988846Z" level=info msg="StopPodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\"" Jan 13 21:39:52.186095 containerd[1791]: time="2025-01-13T21:39:52.186054314Z" level=info msg="TearDown network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" successfully" Jan 13 21:39:52.186095 containerd[1791]: time="2025-01-13T21:39:52.186062659Z" level=info msg="StopPodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" returns successfully" Jan 13 21:39:52.186219 containerd[1791]: time="2025-01-13T21:39:52.186191132Z" level=info msg="RemovePodSandbox for \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\"" Jan 13 21:39:52.186219 containerd[1791]: time="2025-01-13T21:39:52.186205060Z" level=info msg="Forcibly stopping sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\"" Jan 13 21:39:52.186321 containerd[1791]: time="2025-01-13T21:39:52.186236990Z" level=info msg="TearDown network for sandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" successfully" Jan 13 21:39:52.187655 containerd[1791]: time="2025-01-13T21:39:52.187622350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.187655 containerd[1791]: time="2025-01-13T21:39:52.187640477Z" level=info msg="RemovePodSandbox \"87336689dba362b33a1aed034a3f72e1417717ce79d3989184e736d4f8dcc96f\" returns successfully" Jan 13 21:39:52.187879 containerd[1791]: time="2025-01-13T21:39:52.187844130Z" level=info msg="StopPodSandbox for \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\"" Jan 13 21:39:52.187909 containerd[1791]: time="2025-01-13T21:39:52.187902015Z" level=info msg="TearDown network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\" successfully" Jan 13 21:39:52.187935 containerd[1791]: time="2025-01-13T21:39:52.187908332Z" level=info msg="StopPodSandbox for \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\" returns successfully" Jan 13 21:39:52.188084 containerd[1791]: time="2025-01-13T21:39:52.188021992Z" level=info msg="RemovePodSandbox for \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\"" Jan 13 21:39:52.188084 containerd[1791]: time="2025-01-13T21:39:52.188057259Z" level=info msg="Forcibly stopping sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\"" Jan 13 21:39:52.188130 containerd[1791]: time="2025-01-13T21:39:52.188108020Z" level=info msg="TearDown network for sandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\" successfully" Jan 13 21:39:52.189397 containerd[1791]: time="2025-01-13T21:39:52.189352969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.189397 containerd[1791]: time="2025-01-13T21:39:52.189398036Z" level=info msg="RemovePodSandbox \"23eed47e8e311c249160bc5ee9bc5bfc2cf3b901f6333aa4b5030c3c7338077e\" returns successfully" Jan 13 21:39:52.189618 containerd[1791]: time="2025-01-13T21:39:52.189570906Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:52.189767 containerd[1791]: time="2025-01-13T21:39:52.189692223Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" successfully" Jan 13 21:39:52.189767 containerd[1791]: time="2025-01-13T21:39:52.189729261Z" level=info msg="StopPodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully" Jan 13 21:39:52.189952 containerd[1791]: time="2025-01-13T21:39:52.189915465Z" level=info msg="RemovePodSandbox for \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:52.189952 containerd[1791]: time="2025-01-13T21:39:52.189946293Z" level=info msg="Forcibly stopping sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\"" Jan 13 21:39:52.190009 containerd[1791]: time="2025-01-13T21:39:52.189974831Z" level=info msg="TearDown network for sandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" successfully" Jan 13 21:39:52.191210 containerd[1791]: time="2025-01-13T21:39:52.191168491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.191210 containerd[1791]: time="2025-01-13T21:39:52.191186471Z" level=info msg="RemovePodSandbox \"ab06c18aee3e9fbe43db9e4ed27037c97c70906407459ae05d49312078e3e946\" returns successfully" Jan 13 21:39:52.191473 containerd[1791]: time="2025-01-13T21:39:52.191428328Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:52.191473 containerd[1791]: time="2025-01-13T21:39:52.191470675Z" level=info msg="TearDown network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" successfully" Jan 13 21:39:52.191518 containerd[1791]: time="2025-01-13T21:39:52.191476986Z" level=info msg="StopPodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" returns successfully" Jan 13 21:39:52.191640 containerd[1791]: time="2025-01-13T21:39:52.191601978Z" level=info msg="RemovePodSandbox for \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:52.191640 containerd[1791]: time="2025-01-13T21:39:52.191612248Z" level=info msg="Forcibly stopping sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\"" Jan 13 21:39:52.191717 containerd[1791]: time="2025-01-13T21:39:52.191639272Z" level=info msg="TearDown network for sandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" successfully" Jan 13 21:39:52.192970 containerd[1791]: time="2025-01-13T21:39:52.192927102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.192970 containerd[1791]: time="2025-01-13T21:39:52.192944318Z" level=info msg="RemovePodSandbox \"1ab5ab1b1ac93a6bf5538a8c446f1efb210c79c02e5051c5ef81aa75761d444a\" returns successfully" Jan 13 21:39:52.193177 containerd[1791]: time="2025-01-13T21:39:52.193147753Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" Jan 13 21:39:52.193240 containerd[1791]: time="2025-01-13T21:39:52.193235278Z" level=info msg="TearDown network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" successfully" Jan 13 21:39:52.193268 containerd[1791]: time="2025-01-13T21:39:52.193241171Z" level=info msg="StopPodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" returns successfully" Jan 13 21:39:52.193457 containerd[1791]: time="2025-01-13T21:39:52.193446542Z" level=info msg="RemovePodSandbox for \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" Jan 13 21:39:52.193497 containerd[1791]: time="2025-01-13T21:39:52.193460496Z" level=info msg="Forcibly stopping sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\"" Jan 13 21:39:52.193533 containerd[1791]: time="2025-01-13T21:39:52.193504002Z" level=info msg="TearDown network for sandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" successfully" Jan 13 21:39:52.194719 containerd[1791]: time="2025-01-13T21:39:52.194709038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.194745 containerd[1791]: time="2025-01-13T21:39:52.194726641Z" level=info msg="RemovePodSandbox \"31ead84fc974e9d142ca0022a0cf84ff02ef1ab5b9571c6ef2a9a5cf731e13be\" returns successfully" Jan 13 21:39:52.194874 containerd[1791]: time="2025-01-13T21:39:52.194863627Z" level=info msg="StopPodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\"" Jan 13 21:39:52.194914 containerd[1791]: time="2025-01-13T21:39:52.194906310Z" level=info msg="TearDown network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" successfully" Jan 13 21:39:52.194933 containerd[1791]: time="2025-01-13T21:39:52.194913281Z" level=info msg="StopPodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" returns successfully" Jan 13 21:39:52.195039 containerd[1791]: time="2025-01-13T21:39:52.195027012Z" level=info msg="RemovePodSandbox for \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\"" Jan 13 21:39:52.195061 containerd[1791]: time="2025-01-13T21:39:52.195041314Z" level=info msg="Forcibly stopping sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\"" Jan 13 21:39:52.195088 containerd[1791]: time="2025-01-13T21:39:52.195071996Z" level=info msg="TearDown network for sandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" successfully" Jan 13 21:39:52.196909 containerd[1791]: time="2025-01-13T21:39:52.196865471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:39:52.196909 containerd[1791]: time="2025-01-13T21:39:52.196891014Z" level=info msg="RemovePodSandbox \"377b38fe021dfdc9475e7c7178d1ce28fd868791636a3e0ffc987eab66d2a62f\" returns successfully"
Jan 13 21:39:52.197072 containerd[1791]: time="2025-01-13T21:39:52.197031304Z" level=info msg="StopPodSandbox for \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\""
Jan 13 21:39:52.197108 containerd[1791]: time="2025-01-13T21:39:52.197080765Z" level=info msg="TearDown network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\" successfully"
Jan 13 21:39:52.197129 containerd[1791]: time="2025-01-13T21:39:52.197110953Z" level=info msg="StopPodSandbox for \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\" returns successfully"
Jan 13 21:39:52.197243 containerd[1791]: time="2025-01-13T21:39:52.197203526Z" level=info msg="RemovePodSandbox for \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\""
Jan 13 21:39:52.197243 containerd[1791]: time="2025-01-13T21:39:52.197213456Z" level=info msg="Forcibly stopping sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\""
Jan 13 21:39:52.197306 containerd[1791]: time="2025-01-13T21:39:52.197244367Z" level=info msg="TearDown network for sandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\" successfully"
Jan 13 21:39:52.198478 containerd[1791]: time="2025-01-13T21:39:52.198463222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:39:52.198511 containerd[1791]: time="2025-01-13T21:39:52.198488797Z" level=info msg="RemovePodSandbox \"90184776656846672a48fce30f84e8c4d8f9d4fff5b398619b3aaac07294b546\" returns successfully"
Jan 13 21:39:54.415732 systemd[1]: Started sshd@12-86.109.11.45:22-218.92.0.140:45667.service - OpenSSH per-connection server daemon (218.92.0.140:45667).
Jan 13 21:39:55.590781 sshd-session[6937]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:39:56.802210 kubelet[3268]: I0113 21:39:56.802121 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:39:57.672316 sshd[6935]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:39:57.986648 sshd-session[6943]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:40:00.008651 sshd[6935]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:40:00.323082 sshd-session[6944]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:40:02.089222 sshd[6935]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:40:02.245556 sshd[6935]: Received disconnect from 218.92.0.140 port 45667:11: [preauth]
Jan 13 21:40:02.245556 sshd[6935]: Disconnected from authenticating user root 218.92.0.140 port 45667 [preauth]
Jan 13 21:40:02.248895 systemd[1]: sshd@12-86.109.11.45:22-218.92.0.140:45667.service: Deactivated successfully.
Jan 13 21:40:03.989061 kubelet[3268]: I0113 21:40:03.988939 3268 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:41:09.440725 systemd[1]: Started sshd@13-86.109.11.45:22-218.92.0.140:58080.service - OpenSSH per-connection server daemon (218.92.0.140:58080).
Jan 13 21:41:10.604818 sshd-session[7134]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:41:12.651449 sshd[7132]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:41:12.968725 sshd-session[7135]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:41:15.291107 sshd[7132]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:41:15.601744 sshd-session[7136]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:41:17.000501 sshd[7132]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:41:17.155370 sshd[7132]: Received disconnect from 218.92.0.140 port 58080:11: [preauth]
Jan 13 21:41:17.155370 sshd[7132]: Disconnected from authenticating user root 218.92.0.140 port 58080 [preauth]
Jan 13 21:41:17.158796 systemd[1]: sshd@13-86.109.11.45:22-218.92.0.140:58080.service: Deactivated successfully.
Jan 13 21:42:28.667699 systemd[1]: Started sshd@14-86.109.11.45:22-218.92.0.140:26852.service - OpenSSH per-connection server daemon (218.92.0.140:26852).
Jan 13 21:42:29.897362 sshd-session[7289]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:42:31.788439 sshd[7287]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:42:32.117902 sshd-session[7290]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:42:34.284316 sshd[7287]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:42:34.613829 sshd-session[7296]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:42:36.524267 sshd[7287]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:42:36.688213 sshd[7287]: Received disconnect from 218.92.0.140 port 26852:11: [preauth]
Jan 13 21:42:36.688213 sshd[7287]: Disconnected from authenticating user root 218.92.0.140 port 26852 [preauth]
Jan 13 21:42:36.689475 systemd[1]: sshd@14-86.109.11.45:22-218.92.0.140:26852.service: Deactivated successfully.
Jan 13 21:43:02.985209 systemd[1]: Started sshd@15-86.109.11.45:22-92.255.85.189:39254.service - OpenSSH per-connection server daemon (92.255.85.189:39254).
Jan 13 21:43:04.304133 sshd[7384]: Connection closed by authenticating user root 92.255.85.189 port 39254 [preauth]
Jan 13 21:43:04.307202 systemd[1]: sshd@15-86.109.11.45:22-92.255.85.189:39254.service: Deactivated successfully.
Jan 13 21:43:46.944279 systemd[1]: Started sshd@16-86.109.11.45:22-218.92.0.140:41019.service - OpenSSH per-connection server daemon (218.92.0.140:41019).
Jan 13 21:43:48.118954 sshd-session[7493]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:43:50.386444 sshd[7491]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:43:50.697134 sshd-session[7494]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:43:53.375349 sshd[7491]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:43:53.685101 sshd-session[7516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:43:55.440874 sshd[7491]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:43:55.594855 sshd[7491]: Received disconnect from 218.92.0.140 port 41019:11: [preauth]
Jan 13 21:43:55.594855 sshd[7491]: Disconnected from authenticating user root 218.92.0.140 port 41019 [preauth]
Jan 13 21:43:55.598301 systemd[1]: sshd@16-86.109.11.45:22-218.92.0.140:41019.service: Deactivated successfully.
Jan 13 21:44:41.899863 systemd[1]: Started sshd@17-86.109.11.45:22-193.32.162.132:49486.service - OpenSSH per-connection server daemon (193.32.162.132:49486).
Jan 13 21:44:42.607555 sshd[7617]: Invalid user vali from 193.32.162.132 port 49486
Jan 13 21:44:42.780320 sshd[7617]: Connection closed by invalid user vali 193.32.162.132 port 49486 [preauth]
Jan 13 21:44:42.783675 systemd[1]: sshd@17-86.109.11.45:22-193.32.162.132:49486.service: Deactivated successfully.
Jan 13 21:44:57.118763 systemd[1]: Started sshd@18-86.109.11.45:22-147.75.109.163:46800.service - OpenSSH per-connection server daemon (147.75.109.163:46800).
Jan 13 21:44:57.172023 sshd[7673]: Accepted publickey for core from 147.75.109.163 port 46800 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:44:57.172887 sshd-session[7673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:44:57.176513 systemd-logind[1781]: New session 12 of user core.
Jan 13 21:44:57.190251 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:44:57.287953 sshd[7675]: Connection closed by 147.75.109.163 port 46800
Jan 13 21:44:57.288098 sshd-session[7673]: pam_unix(sshd:session): session closed for user core
Jan 13 21:44:57.289668 systemd[1]: sshd@18-86.109.11.45:22-147.75.109.163:46800.service: Deactivated successfully.
Jan 13 21:44:57.290562 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:44:57.291278 systemd-logind[1781]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:44:57.291867 systemd-logind[1781]: Removed session 12.
Jan 13 21:45:01.514239 systemd[1]: Started sshd@19-86.109.11.45:22-218.92.0.140:52038.service - OpenSSH per-connection server daemon (218.92.0.140:52038).
Jan 13 21:45:02.302433 systemd[1]: Started sshd@20-86.109.11.45:22-147.75.109.163:52480.service - OpenSSH per-connection server daemon (147.75.109.163:52480).
Jan 13 21:45:02.333527 sshd[7707]: Accepted publickey for core from 147.75.109.163 port 52480 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:02.334298 sshd-session[7707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:02.337254 systemd-logind[1781]: New session 13 of user core.
Jan 13 21:45:02.349260 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:45:02.442167 sshd[7709]: Connection closed by 147.75.109.163 port 52480
Jan 13 21:45:02.442386 sshd-session[7707]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:02.444179 systemd[1]: sshd@20-86.109.11.45:22-147.75.109.163:52480.service: Deactivated successfully.
Jan 13 21:45:02.445279 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:45:02.446122 systemd-logind[1781]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:45:02.446904 systemd-logind[1781]: Removed session 13.
Jan 13 21:45:02.648037 sshd-session[7735]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:45:04.544054 sshd[7704]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:45:04.855419 sshd-session[7736]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:45:06.360041 sshd[7704]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:45:06.671146 sshd-session[7737]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.140 user=root
Jan 13 21:45:07.461385 systemd[1]: Started sshd@21-86.109.11.45:22-147.75.109.163:46916.service - OpenSSH per-connection server daemon (147.75.109.163:46916).
Jan 13 21:45:07.490258 sshd[7741]: Accepted publickey for core from 147.75.109.163 port 46916 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:07.490944 sshd-session[7741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:07.493689 systemd-logind[1781]: New session 14 of user core.
Jan 13 21:45:07.506499 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:45:07.600151 sshd[7743]: Connection closed by 147.75.109.163 port 46916
Jan 13 21:45:07.600335 sshd-session[7741]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:07.618703 systemd[1]: sshd@21-86.109.11.45:22-147.75.109.163:46916.service: Deactivated successfully.
Jan 13 21:45:07.619514 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:45:07.620206 systemd-logind[1781]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:45:07.620921 systemd[1]: Started sshd@22-86.109.11.45:22-147.75.109.163:46924.service - OpenSSH per-connection server daemon (147.75.109.163:46924).
Jan 13 21:45:07.621414 systemd-logind[1781]: Removed session 14.
Jan 13 21:45:07.652583 sshd[7766]: Accepted publickey for core from 147.75.109.163 port 46924 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:07.653440 sshd-session[7766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:07.656694 systemd-logind[1781]: New session 15 of user core.
Jan 13 21:45:07.677259 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:45:07.779705 sshd[7768]: Connection closed by 147.75.109.163 port 46924
Jan 13 21:45:07.779901 sshd-session[7766]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:07.790683 systemd[1]: sshd@22-86.109.11.45:22-147.75.109.163:46924.service: Deactivated successfully.
Jan 13 21:45:07.791553 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:45:07.792264 systemd-logind[1781]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:45:07.792921 systemd[1]: Started sshd@23-86.109.11.45:22-147.75.109.163:46932.service - OpenSSH per-connection server daemon (147.75.109.163:46932).
Jan 13 21:45:07.793385 systemd-logind[1781]: Removed session 15.
Jan 13 21:45:07.822264 sshd[7790]: Accepted publickey for core from 147.75.109.163 port 46932 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:07.825579 sshd-session[7790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:07.836918 systemd-logind[1781]: New session 16 of user core.
Jan 13 21:45:07.852527 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:45:08.000769 sshd[7794]: Connection closed by 147.75.109.163 port 46932
Jan 13 21:45:08.000952 sshd-session[7790]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:08.002572 systemd[1]: sshd@23-86.109.11.45:22-147.75.109.163:46932.service: Deactivated successfully.
Jan 13 21:45:08.003506 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:45:08.004210 systemd-logind[1781]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:45:08.004794 systemd-logind[1781]: Removed session 16.
Jan 13 21:45:08.115610 sshd[7704]: PAM: Permission denied for root from 218.92.0.140
Jan 13 21:45:08.270711 sshd[7704]: Received disconnect from 218.92.0.140 port 52038:11: [preauth]
Jan 13 21:45:08.270711 sshd[7704]: Disconnected from authenticating user root 218.92.0.140 port 52038 [preauth]
Jan 13 21:45:08.275778 systemd[1]: sshd@19-86.109.11.45:22-218.92.0.140:52038.service: Deactivated successfully.
Jan 13 21:45:13.010993 systemd[1]: Started sshd@24-86.109.11.45:22-147.75.109.163:46948.service - OpenSSH per-connection server daemon (147.75.109.163:46948).
Jan 13 21:45:13.048131 sshd[7850]: Accepted publickey for core from 147.75.109.163 port 46948 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:13.048960 sshd-session[7850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:13.051847 systemd-logind[1781]: New session 17 of user core.
Jan 13 21:45:13.062157 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:45:13.148920 sshd[7852]: Connection closed by 147.75.109.163 port 46948
Jan 13 21:45:13.149111 sshd-session[7850]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:13.150916 systemd[1]: sshd@24-86.109.11.45:22-147.75.109.163:46948.service: Deactivated successfully.
Jan 13 21:45:13.151885 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:45:13.152352 systemd-logind[1781]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:45:13.152930 systemd-logind[1781]: Removed session 17.
Jan 13 21:45:18.159101 systemd[1]: Started sshd@25-86.109.11.45:22-147.75.109.163:36730.service - OpenSSH per-connection server daemon (147.75.109.163:36730).
Jan 13 21:45:18.188242 sshd[7877]: Accepted publickey for core from 147.75.109.163 port 36730 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:18.188982 sshd-session[7877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:18.191763 systemd-logind[1781]: New session 18 of user core.
Jan 13 21:45:18.206294 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:45:18.296145 sshd[7879]: Connection closed by 147.75.109.163 port 36730
Jan 13 21:45:18.296382 sshd-session[7877]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:18.311817 systemd[1]: sshd@25-86.109.11.45:22-147.75.109.163:36730.service: Deactivated successfully.
Jan 13 21:45:18.312735 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:45:18.313588 systemd-logind[1781]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:45:18.314385 systemd[1]: Started sshd@26-86.109.11.45:22-147.75.109.163:36736.service - OpenSSH per-connection server daemon (147.75.109.163:36736).
Jan 13 21:45:18.314963 systemd-logind[1781]: Removed session 18.
Jan 13 21:45:18.348068 sshd[7903]: Accepted publickey for core from 147.75.109.163 port 36736 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:18.348940 sshd-session[7903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:18.352236 systemd-logind[1781]: New session 19 of user core.
Jan 13 21:45:18.364288 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:45:18.500283 sshd[7906]: Connection closed by 147.75.109.163 port 36736
Jan 13 21:45:18.500438 sshd-session[7903]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:18.518815 systemd[1]: sshd@26-86.109.11.45:22-147.75.109.163:36736.service: Deactivated successfully.
Jan 13 21:45:18.519724 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:45:18.520508 systemd-logind[1781]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:45:18.521296 systemd[1]: Started sshd@27-86.109.11.45:22-147.75.109.163:36750.service - OpenSSH per-connection server daemon (147.75.109.163:36750).
Jan 13 21:45:18.521945 systemd-logind[1781]: Removed session 19.
Jan 13 21:45:18.555290 sshd[7928]: Accepted publickey for core from 147.75.109.163 port 36750 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:18.556258 sshd-session[7928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:18.560040 systemd-logind[1781]: New session 20 of user core.
Jan 13 21:45:18.569278 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:45:19.738793 sshd[7930]: Connection closed by 147.75.109.163 port 36750
Jan 13 21:45:19.739016 sshd-session[7928]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:19.754792 systemd[1]: sshd@27-86.109.11.45:22-147.75.109.163:36750.service: Deactivated successfully.
Jan 13 21:45:19.755696 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:45:19.756482 systemd-logind[1781]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:45:19.757081 systemd[1]: Started sshd@28-86.109.11.45:22-147.75.109.163:36764.service - OpenSSH per-connection server daemon (147.75.109.163:36764).
Jan 13 21:45:19.757633 systemd-logind[1781]: Removed session 20.
Jan 13 21:45:19.788501 sshd[7962]: Accepted publickey for core from 147.75.109.163 port 36764 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:19.789306 sshd-session[7962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:19.792379 systemd-logind[1781]: New session 21 of user core.
Jan 13 21:45:19.809238 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:45:19.989974 sshd[7964]: Connection closed by 147.75.109.163 port 36764
Jan 13 21:45:19.990367 sshd-session[7962]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:20.003685 systemd[1]: sshd@28-86.109.11.45:22-147.75.109.163:36764.service: Deactivated successfully.
Jan 13 21:45:20.004512 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:45:20.005304 systemd-logind[1781]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:45:20.005979 systemd[1]: Started sshd@29-86.109.11.45:22-147.75.109.163:36772.service - OpenSSH per-connection server daemon (147.75.109.163:36772).
Jan 13 21:45:20.006551 systemd-logind[1781]: Removed session 21.
Jan 13 21:45:20.037409 sshd[7986]: Accepted publickey for core from 147.75.109.163 port 36772 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:20.038206 sshd-session[7986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:20.041534 systemd-logind[1781]: New session 22 of user core.
Jan 13 21:45:20.061361 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:45:20.213431 sshd[7988]: Connection closed by 147.75.109.163 port 36772
Jan 13 21:45:20.213741 sshd-session[7986]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:20.216314 systemd[1]: sshd@29-86.109.11.45:22-147.75.109.163:36772.service: Deactivated successfully.
Jan 13 21:45:20.217893 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:45:20.219122 systemd-logind[1781]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:45:20.220095 systemd-logind[1781]: Removed session 22.
Jan 13 21:45:25.245330 systemd[1]: Started sshd@30-86.109.11.45:22-147.75.109.163:36784.service - OpenSSH per-connection server daemon (147.75.109.163:36784).
Jan 13 21:45:25.274793 sshd[8034]: Accepted publickey for core from 147.75.109.163 port 36784 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:25.275933 sshd-session[8034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:25.280565 systemd-logind[1781]: New session 23 of user core.
Jan 13 21:45:25.294446 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:45:25.389078 sshd[8036]: Connection closed by 147.75.109.163 port 36784
Jan 13 21:45:25.389235 sshd-session[8034]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:25.391201 systemd[1]: sshd@30-86.109.11.45:22-147.75.109.163:36784.service: Deactivated successfully.
Jan 13 21:45:25.392118 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:45:25.392542 systemd-logind[1781]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:45:25.393015 systemd-logind[1781]: Removed session 23.
Jan 13 21:45:30.404363 systemd[1]: Started sshd@31-86.109.11.45:22-147.75.109.163:55514.service - OpenSSH per-connection server daemon (147.75.109.163:55514).
Jan 13 21:45:30.434286 sshd[8060]: Accepted publickey for core from 147.75.109.163 port 55514 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:30.437543 sshd-session[8060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:30.448642 systemd-logind[1781]: New session 24 of user core.
Jan 13 21:45:30.465584 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:45:30.557995 sshd[8062]: Connection closed by 147.75.109.163 port 55514
Jan 13 21:45:30.558202 sshd-session[8060]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:30.559740 systemd[1]: sshd@31-86.109.11.45:22-147.75.109.163:55514.service: Deactivated successfully.
Jan 13 21:45:30.560707 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:45:30.561467 systemd-logind[1781]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:45:30.561973 systemd-logind[1781]: Removed session 24.
Jan 13 21:45:35.568646 systemd[1]: Started sshd@32-86.109.11.45:22-147.75.109.163:55522.service - OpenSSH per-connection server daemon (147.75.109.163:55522).
Jan 13 21:45:35.600257 sshd[8091]: Accepted publickey for core from 147.75.109.163 port 55522 ssh2: RSA SHA256:dFT9fUI+BuTh81hS70NxbgIsj2oNr5wnje2V1H3NNmU
Jan 13 21:45:35.601071 sshd-session[8091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:45:35.603936 systemd-logind[1781]: New session 25 of user core.
Jan 13 21:45:35.612242 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:45:35.699243 sshd[8093]: Connection closed by 147.75.109.163 port 55522
Jan 13 21:45:35.699426 sshd-session[8091]: pam_unix(sshd:session): session closed for user core
Jan 13 21:45:35.700938 systemd[1]: sshd@32-86.109.11.45:22-147.75.109.163:55522.service: Deactivated successfully.
Jan 13 21:45:35.701837 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:45:35.702529 systemd-logind[1781]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:45:35.703110 systemd-logind[1781]: Removed session 25.