Feb 13 21:23:28.024496 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Feb 13 21:23:28.024511 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 21:23:28.024517 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 21:23:28.024522 kernel: BIOS-provided physical RAM map:
Feb 13 21:23:28.024526 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 21:23:28.024530 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 21:23:28.024535 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 21:23:28.024539 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 21:23:28.024543 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 21:23:28.024547 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b27fff] usable
Feb 13 21:23:28.024551 kernel: BIOS-e820: [mem 0x0000000081b28000-0x0000000081b28fff] ACPI NVS
Feb 13 21:23:28.024556 kernel: BIOS-e820: [mem 0x0000000081b29000-0x0000000081b29fff] reserved
Feb 13 21:23:28.024560 kernel: BIOS-e820: [mem 0x0000000081b2a000-0x000000008afccfff] usable
Feb 13 21:23:28.024564 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 21:23:28.024570 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 21:23:28.024574 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 21:23:28.024580 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 21:23:28.024585 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 21:23:28.024589 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 21:23:28.024594 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 21:23:28.024598 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 21:23:28.024603 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 21:23:28.024607 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 21:23:28.024612 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 21:23:28.024616 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 21:23:28.024621 kernel: NX (Execute Disable) protection: active
Feb 13 21:23:28.024625 kernel: APIC: Static calls initialized
Feb 13 21:23:28.024630 kernel: SMBIOS 3.2.1 present.
Feb 13 21:23:28.024636 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 21:23:28.024640 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 21:23:28.024645 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 21:23:28.024649 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 21:23:28.024655 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 21:23:28.024659 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 21:23:28.024664 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Feb 13 21:23:28.024669 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 21:23:28.024674 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 21:23:28.024679 kernel: Using GB pages for direct mapping
Feb 13 21:23:28.024684 kernel: ACPI: Early table checksum verification disabled
Feb 13 21:23:28.024689 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 21:23:28.024696 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 21:23:28.024701 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 21:23:28.024706 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 21:23:28.024711 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 21:23:28.024717 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 21:23:28.024722 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 21:23:28.024727 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 21:23:28.024732 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 21:23:28.024737 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 21:23:28.024742 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 21:23:28.024748 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 21:23:28.024754 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 21:23:28.024759 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 21:23:28.024764 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 21:23:28.024769 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 21:23:28.024774 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 21:23:28.024779 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 21:23:28.024784 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 21:23:28.024789 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 21:23:28.024794 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 21:23:28.024800 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 21:23:28.024805 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 21:23:28.024810 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 21:23:28.024815 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 21:23:28.024820 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 21:23:28.024825 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 21:23:28.024830 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 21:23:28.024835 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 21:23:28.024840 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 21:23:28.024845 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 21:23:28.024850 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 21:23:28.024855 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 21:23:28.024860 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 21:23:28.024865 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 21:23:28.024871 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 21:23:28.024875 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 21:23:28.024880 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 21:23:28.024886 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 21:23:28.024891 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 21:23:28.024896 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 21:23:28.024901 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 21:23:28.024906 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 21:23:28.024911 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 21:23:28.024916 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 21:23:28.024921 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 21:23:28.024926 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 21:23:28.024932 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 21:23:28.024937 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 21:23:28.024942 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 21:23:28.024947 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 21:23:28.024952 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 21:23:28.024956 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 21:23:28.024961 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 21:23:28.024966 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 21:23:28.024971 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 21:23:28.024977 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 21:23:28.024982 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 21:23:28.024987 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 21:23:28.024992 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 21:23:28.024997 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 21:23:28.025002 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 21:23:28.025007 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 21:23:28.025012 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 21:23:28.025017 kernel: No NUMA configuration found
Feb 13 21:23:28.025022 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 21:23:28.025028 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 21:23:28.025033 kernel: Zone ranges:
Feb 13 21:23:28.025038 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 21:23:28.025043 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 21:23:28.025048 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 21:23:28.025053 kernel: Movable zone start for each node
Feb 13 21:23:28.025058 kernel: Early memory node ranges
Feb 13 21:23:28.025063 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 21:23:28.025068 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 21:23:28.025073 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff]
Feb 13 21:23:28.025079 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff]
Feb 13 21:23:28.025084 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 21:23:28.025089 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 21:23:28.025098 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 21:23:28.025106 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 21:23:28.025111 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 21:23:28.025117 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 21:23:28.025123 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 21:23:28.025129 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 21:23:28.025134 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 21:23:28.025139 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 21:23:28.025145 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 21:23:28.025150 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 21:23:28.025155 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 21:23:28.025161 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 21:23:28.025166 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 21:23:28.025173 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 21:23:28.025178 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 21:23:28.025183 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 21:23:28.025188 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 21:23:28.025194 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 21:23:28.025199 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 21:23:28.025205 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 21:23:28.025210 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 21:23:28.025215 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 21:23:28.025220 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 21:23:28.025227 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 21:23:28.025232 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 21:23:28.025237 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 21:23:28.025242 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 21:23:28.025248 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 21:23:28.025253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 21:23:28.025258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 21:23:28.025264 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 21:23:28.025269 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 21:23:28.025276 kernel: TSC deadline timer available
Feb 13 21:23:28.025281 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 21:23:28.025286 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 21:23:28.025292 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 21:23:28.025297 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 21:23:28.025303 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 21:23:28.025308 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 21:23:28.025314 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 21:23:28.025319 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 21:23:28.025326 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 21:23:28.025332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 21:23:28.025337 kernel: random: crng init done
Feb 13 21:23:28.025342 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 21:23:28.025348 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 21:23:28.025353 kernel: Fallback order for Node 0: 0
Feb 13 21:23:28.025358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 21:23:28.025364 kernel: Policy zone: Normal
Feb 13 21:23:28.025370 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 21:23:28.025375 kernel: software IO TLB: area num 16.
Feb 13 21:23:28.025381 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 732420K reserved, 0K cma-reserved)
Feb 13 21:23:28.025387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 21:23:28.025392 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 21:23:28.025398 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 21:23:28.025403 kernel: Dynamic Preempt: voluntary
Feb 13 21:23:28.025408 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 21:23:28.025414 kernel: rcu: RCU event tracing is enabled.
Feb 13 21:23:28.025420 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 21:23:28.025426 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 21:23:28.025431 kernel: Rude variant of Tasks RCU enabled.
Feb 13 21:23:28.025437 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 21:23:28.025442 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 21:23:28.025447 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 21:23:28.025453 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 21:23:28.025458 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 21:23:28.025464 kernel: Console: colour dummy device 80x25
Feb 13 21:23:28.025470 kernel: printk: console [tty0] enabled
Feb 13 21:23:28.025475 kernel: printk: console [ttyS1] enabled
Feb 13 21:23:28.025480 kernel: ACPI: Core revision 20230628
Feb 13 21:23:28.025486 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 21:23:28.025491 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 21:23:28.025497 kernel: DMAR: Host address width 39
Feb 13 21:23:28.025502 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 21:23:28.025507 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 21:23:28.025513 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 21:23:28.025519 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 21:23:28.025524 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 21:23:28.025530 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 21:23:28.025535 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 21:23:28.025540 kernel: x2apic enabled
Feb 13 21:23:28.025546 kernel: APIC: Switched APIC routing to: cluster x2apic
Feb 13 21:23:28.025551 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 21:23:28.025557 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 21:23:28.025562 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 21:23:28.025569 kernel: process: using mwait in idle threads
Feb 13 21:23:28.025574 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 21:23:28.025579 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 21:23:28.025585 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 21:23:28.025590 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 21:23:28.025595 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 21:23:28.025600 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 21:23:28.025606 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 21:23:28.025611 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 21:23:28.025616 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 21:23:28.025622 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 21:23:28.025628 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 21:23:28.025633 kernel: TAA: Mitigation: TSX disabled
Feb 13 21:23:28.025639 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 21:23:28.025644 kernel: SRBDS: Mitigation: Microcode
Feb 13 21:23:28.025650 kernel: GDS: Mitigation: Microcode
Feb 13 21:23:28.025655 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 21:23:28.025660 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 21:23:28.025666 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 21:23:28.025671 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 21:23:28.025676 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 21:23:28.025682 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 21:23:28.025688 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 21:23:28.025693 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 21:23:28.025699 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 21:23:28.025704 kernel: Freeing SMP alternatives memory: 32K
Feb 13 21:23:28.025709 kernel: pid_max: default: 32768 minimum: 301
Feb 13 21:23:28.025714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 21:23:28.025720 kernel: landlock: Up and running.
Feb 13 21:23:28.025725 kernel: SELinux: Initializing.
Feb 13 21:23:28.025730 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 21:23:28.025736 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 21:23:28.025741 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 21:23:28.025747 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:28.025753 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:28.025759 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:28.025764 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 21:23:28.025769 kernel: ... version: 4
Feb 13 21:23:28.025775 kernel: ... bit width: 48
Feb 13 21:23:28.025780 kernel: ... generic registers: 4
Feb 13 21:23:28.025786 kernel: ... value mask: 0000ffffffffffff
Feb 13 21:23:28.025791 kernel: ... max period: 00007fffffffffff
Feb 13 21:23:28.025797 kernel: ... fixed-purpose events: 3
Feb 13 21:23:28.025803 kernel: ... event mask: 000000070000000f
Feb 13 21:23:28.025808 kernel: signal: max sigframe size: 2032
Feb 13 21:23:28.025814 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 21:23:28.025819 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 21:23:28.025824 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 21:23:28.025830 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 21:23:28.025835 kernel: smp: Bringing up secondary CPUs ...
Feb 13 21:23:28.025840 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 21:23:28.025847 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Feb 13 21:23:28.025852 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 21:23:28.025858 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 21:23:28.025863 kernel: smpboot: Max logical packages: 1
Feb 13 21:23:28.025869 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 21:23:28.025874 kernel: devtmpfs: initialized
Feb 13 21:23:28.025879 kernel: x86/mm: Memory block size: 128MB
Feb 13 21:23:28.025885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes)
Feb 13 21:23:28.025890 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 21:23:28.025896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 21:23:28.025902 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 21:23:28.025907 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 21:23:28.025913 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 21:23:28.025918 kernel: audit: initializing netlink subsys (disabled)
Feb 13 21:23:28.025923 kernel: audit: type=2000 audit(1739481802.039:1): state=initialized audit_enabled=0 res=1
Feb 13 21:23:28.025928 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 21:23:28.025934 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 21:23:28.025939 kernel: cpuidle: using governor menu
Feb 13 21:23:28.025945 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 21:23:28.025951 kernel: dca service started, version 1.12.1
Feb 13 21:23:28.025956 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 21:23:28.025961 kernel: PCI: Using configuration type 1 for base access
Feb 13 21:23:28.025967 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 21:23:28.025972 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 21:23:28.025978 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 21:23:28.025983 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 21:23:28.025989 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 21:23:28.025995 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 21:23:28.026000 kernel: ACPI: Added _OSI(Module Device)
Feb 13 21:23:28.026005 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 21:23:28.026010 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 21:23:28.026016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 21:23:28.026021 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 21:23:28.026026 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026032 kernel: ACPI: SSDT 0xFFFF8FA900FAF400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 21:23:28.026037 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026043 kernel: ACPI: SSDT 0xFFFF8FA900F9C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 21:23:28.026049 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026054 kernel: ACPI: SSDT 0xFFFF8FA900F86A00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 21:23:28.026060 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026065 kernel: ACPI: SSDT 0xFFFF8FA900F9F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 21:23:28.026070 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026075 kernel: ACPI: SSDT 0xFFFF8FA900FA1000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 21:23:28.026081 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026086 kernel: ACPI: SSDT 0xFFFF8FA900FADC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 21:23:28.026092 kernel: ACPI: _OSC evaluated successfully for all CPUs
Feb 13 21:23:28.026098 kernel: ACPI: Interpreter enabled
Feb 13 21:23:28.026122 kernel: ACPI: PM: (supports S0 S5)
Feb 13 21:23:28.026128 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 21:23:28.026147 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 21:23:28.026152 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 21:23:28.026157 kernel: HEST: Table parsing has been initialized.
Feb 13 21:23:28.026163 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 21:23:28.026168 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 21:23:28.026174 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 21:23:28.026180 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 21:23:28.026185 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Feb 13 21:23:28.026191 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Feb 13 21:23:28.026196 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Feb 13 21:23:28.026201 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Feb 13 21:23:28.026207 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Feb 13 21:23:28.026212 kernel: ACPI: \_TZ_.FN00: New power resource
Feb 13 21:23:28.026218 kernel: ACPI: \_TZ_.FN01: New power resource
Feb 13 21:23:28.026224 kernel: ACPI: \_TZ_.FN02: New power resource
Feb 13 21:23:28.026229 kernel: ACPI: \_TZ_.FN03: New power resource
Feb 13 21:23:28.026234 kernel: ACPI: \_TZ_.FN04: New power resource
Feb 13 21:23:28.026240 kernel: ACPI: \PIN_: New power resource
Feb 13 21:23:28.026245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 21:23:28.026319 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 21:23:28.026370 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 21:23:28.026417 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 21:23:28.026426 kernel: PCI host bridge to bus 0000:00
Feb 13 21:23:28.026473 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 21:23:28.026515 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 21:23:28.026556 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 21:23:28.026597 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 21:23:28.026637 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 21:23:28.026678 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 21:23:28.026735 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 21:23:28.026791 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 21:23:28.026839 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.026890 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 21:23:28.026936 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 21:23:28.026986 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 21:23:28.027034 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 21:23:28.027086 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 21:23:28.027135 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 21:23:28.027183 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 21:23:28.027233 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 21:23:28.027280 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 21:23:28.027329 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 21:23:28.027379 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 21:23:28.027426 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 21:23:28.027478 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 21:23:28.027525 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 21:23:28.027575 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 21:23:28.027624 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 21:23:28.027672 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 21:23:28.027728 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 21:23:28.027775 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 21:23:28.027822 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 21:23:28.027871 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 21:23:28.027917 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 21:23:28.027966 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 21:23:28.028015 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 21:23:28.028064 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 21:23:28.028112 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 21:23:28.028197 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 21:23:28.028242 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 21:23:28.028289 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 21:23:28.028337 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 21:23:28.028383 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 21:23:28.028434 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 21:23:28.028483 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028538 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 21:23:28.028585 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028634 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 21:23:28.028682 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028732 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 21:23:28.028781 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028832 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 21:23:28.028879 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028929 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 21:23:28.028975 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 21:23:28.029025 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 21:23:28.029076 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 21:23:28.029161 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 21:23:28.029209 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 21:23:28.029262 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 21:23:28.029308 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 21:23:28.029360 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 21:23:28.029407 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 21:23:28.029458 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 21:23:28.029506 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 21:23:28.029553 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 21:23:28.029601 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 21:23:28.029653 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 21:23:28.029702 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 21:23:28.029750 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 21:23:28.029798 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 21:23:28.029846 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 21:23:28.029893 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 21:23:28.029941 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 21:23:28.029987 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 21:23:28.030034 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 21:23:28.030080
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 21:23:28.030137 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Feb 13 21:23:28.030189 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 21:23:28.030236 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 21:23:28.030284 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 21:23:28.030330 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 21:23:28.030378 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 21:23:28.030425 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 21:23:28.030473 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 21:23:28.030520 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 21:23:28.030573 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 21:23:28.030622 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 21:23:28.030669 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 21:23:28.030718 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 21:23:28.030765 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 21:23:28.030813 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 21:23:28.030863 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 21:23:28.030910 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 21:23:28.030957 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 21:23:28.031004 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 21:23:28.031057 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 21:23:28.031108 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 21:23:28.031158 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 21:23:28.031206 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 21:23:28.031257 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Feb 13 21:23:28.031303 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 21:23:28.031351 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 21:23:28.031407 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 21:23:28.031461 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 21:23:28.031512 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 21:23:28.031562 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 21:23:28.031614 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 21:23:28.031663 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 21:23:28.031712 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 21:23:28.031763 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 21:23:28.031810 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 21:23:28.031859 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 21:23:28.031906 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 21:23:28.031915 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 21:23:28.031921 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 21:23:28.031927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 21:23:28.031933 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 21:23:28.031939 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 21:23:28.031944 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 21:23:28.031950 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 21:23:28.031956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 21:23:28.031961 kernel: iommu: Default domain type: Translated Feb 13 21:23:28.031968 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 21:23:28.031974 kernel: PCI: Using ACPI for IRQ 
routing Feb 13 21:23:28.031979 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 21:23:28.031985 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 21:23:28.031991 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff] Feb 13 21:23:28.031996 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 21:23:28.032002 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 21:23:28.032007 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 21:23:28.032013 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 21:23:28.032063 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 21:23:28.032132 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 21:23:28.032197 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 21:23:28.032205 kernel: vgaarb: loaded Feb 13 21:23:28.032211 kernel: clocksource: Switched to clocksource tsc-early Feb 13 21:23:28.032217 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 21:23:28.032222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 21:23:28.032228 kernel: pnp: PnP ACPI init Feb 13 21:23:28.032276 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 21:23:28.032324 kernel: pnp 00:02: [dma 0 disabled] Feb 13 21:23:28.032370 kernel: pnp 00:03: [dma 0 disabled] Feb 13 21:23:28.032418 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 21:23:28.032462 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 21:23:28.032507 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 21:23:28.032553 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 21:23:28.032597 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 21:23:28.032640 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 21:23:28.032681 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Feb 13 21:23:28.032728 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 21:23:28.032771 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 21:23:28.032814 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 21:23:28.032856 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 21:23:28.032905 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 21:23:28.032947 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 21:23:28.032990 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 21:23:28.033032 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 21:23:28.033073 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 21:23:28.033135 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 21:23:28.033193 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 21:23:28.033239 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 21:23:28.033247 kernel: pnp: PnP ACPI: found 10 devices Feb 13 21:23:28.033253 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 21:23:28.033259 kernel: NET: Registered PF_INET protocol family Feb 13 21:23:28.033265 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:23:28.033271 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 21:23:28.033277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 21:23:28.033283 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:23:28.033290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 21:23:28.033296 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 21:23:28.033302 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:23:28.033308 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:23:28.033313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 21:23:28.033319 kernel: NET: Registered PF_XDP protocol family Feb 13 21:23:28.033366 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 21:23:28.033413 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 21:23:28.033462 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 21:23:28.033511 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 21:23:28.033558 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 21:23:28.033607 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 21:23:28.033654 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 21:23:28.033700 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 21:23:28.033747 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 21:23:28.033793 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:23:28.033842 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 21:23:28.033888 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 21:23:28.033935 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 21:23:28.033981 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 21:23:28.034028 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 21:23:28.034076 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 21:23:28.034159 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 21:23:28.034205 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 21:23:28.034253 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Feb 13 21:23:28.034300 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 21:23:28.034348 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 21:23:28.034395 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 21:23:28.034440 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 21:23:28.034487 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 21:23:28.034532 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 21:23:28.034574 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 21:23:28.034615 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 21:23:28.034656 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 21:23:28.034697 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 21:23:28.034738 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 21:23:28.034783 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 21:23:28.034829 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:23:28.034879 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 21:23:28.034922 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 21:23:28.034969 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 21:23:28.035013 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 21:23:28.035059 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 21:23:28.035107 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 21:23:28.035188 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 21:23:28.035234 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 21:23:28.035242 kernel: PCI: CLS 64 bytes, default 64 Feb 13 21:23:28.035248 kernel: DMAR: No ATSR found Feb 13 21:23:28.035253 kernel: DMAR: No SATC 
found Feb 13 21:23:28.035259 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 21:23:28.035306 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 21:23:28.035355 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 21:23:28.035402 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 21:23:28.035449 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 21:23:28.035495 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 21:23:28.035541 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 21:23:28.035588 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 21:23:28.035633 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 21:23:28.035680 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 21:23:28.035726 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 21:23:28.035775 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 21:23:28.035820 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 21:23:28.035867 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 21:23:28.035913 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 21:23:28.035960 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 21:23:28.036007 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 21:23:28.036053 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 21:23:28.036101 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 21:23:28.036182 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 21:23:28.036229 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 21:23:28.036274 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 21:23:28.036323 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 21:23:28.036370 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 21:23:28.036418 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 21:23:28.036465 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 21:23:28.036513 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 
21:23:28.036564 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 21:23:28.036572 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 21:23:28.036578 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 21:23:28.036585 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 21:23:28.036590 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 21:23:28.036596 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 21:23:28.036602 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 21:23:28.036608 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 21:23:28.036656 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 21:23:28.036666 kernel: Initialise system trusted keyrings Feb 13 21:23:28.036671 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 21:23:28.036677 kernel: Key type asymmetric registered Feb 13 21:23:28.036683 kernel: Asymmetric key parser 'x509' registered Feb 13 21:23:28.036688 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 21:23:28.036694 kernel: io scheduler mq-deadline registered Feb 13 21:23:28.036700 kernel: io scheduler kyber registered Feb 13 21:23:28.036705 kernel: io scheduler bfq registered Feb 13 21:23:28.036753 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 21:23:28.036799 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 21:23:28.036846 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 21:23:28.036891 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 21:23:28.036938 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 21:23:28.036984 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 21:23:28.037037 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 21:23:28.037047 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Feb 13 21:23:28.037053 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 21:23:28.037059 kernel: pstore: Using crash dump compression: deflate Feb 13 21:23:28.037065 kernel: pstore: Registered erst as persistent store backend Feb 13 21:23:28.037070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 21:23:28.037076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 21:23:28.037082 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 21:23:28.037088 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 21:23:28.037093 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 21:23:28.037184 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 21:23:28.037193 kernel: i8042: PNP: No PS/2 controller found. Feb 13 21:23:28.037236 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 21:23:28.037279 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 21:23:28.037322 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T21:23:26 UTC (1739481806) Feb 13 21:23:28.037366 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 21:23:28.037374 kernel: intel_pstate: Intel P-state driver initializing Feb 13 21:23:28.037380 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 21:23:28.037387 kernel: intel_pstate: HWP enabled Feb 13 21:23:28.037393 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 21:23:28.037399 kernel: vesafb: scrolling: redraw Feb 13 21:23:28.037404 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 21:23:28.037410 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000031ebe921, using 768k, total 768k Feb 13 21:23:28.037416 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 21:23:28.037422 kernel: fb0: VESA VGA frame buffer device Feb 13 21:23:28.037427 kernel: NET: Registered PF_INET6 
protocol family Feb 13 21:23:28.037433 kernel: Segment Routing with IPv6 Feb 13 21:23:28.037440 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 21:23:28.037445 kernel: NET: Registered PF_PACKET protocol family Feb 13 21:23:28.037451 kernel: Key type dns_resolver registered Feb 13 21:23:28.037457 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 21:23:28.037462 kernel: IPI shorthand broadcast: enabled Feb 13 21:23:28.037468 kernel: sched_clock: Marking stable (2476125858, 1385628491)->(4406112287, -544357938) Feb 13 21:23:28.037474 kernel: registered taskstats version 1 Feb 13 21:23:28.037480 kernel: Loading compiled-in X.509 certificates Feb 13 21:23:28.037485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 21:23:28.037492 kernel: Key type .fscrypt registered Feb 13 21:23:28.037497 kernel: Key type fscrypt-provisioning registered Feb 13 21:23:28.037503 kernel: ima: Allocated hash algorithm: sha1 Feb 13 21:23:28.037509 kernel: ima: No architecture policies found Feb 13 21:23:28.037514 kernel: clk: Disabling unused clocks Feb 13 21:23:28.037520 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 21:23:28.037526 kernel: Write protecting the kernel read-only data: 36864k Feb 13 21:23:28.037531 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 21:23:28.037538 kernel: Run /init as init process Feb 13 21:23:28.037544 kernel: with arguments: Feb 13 21:23:28.037549 kernel: /init Feb 13 21:23:28.037555 kernel: with environment: Feb 13 21:23:28.037560 kernel: HOME=/ Feb 13 21:23:28.037566 kernel: TERM=linux Feb 13 21:23:28.037572 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 21:23:28.037579 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ 
+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 21:23:28.037587 systemd[1]: Detected architecture x86-64. Feb 13 21:23:28.037593 systemd[1]: Running in initrd. Feb 13 21:23:28.037599 systemd[1]: No hostname configured, using default hostname. Feb 13 21:23:28.037605 systemd[1]: Hostname set to . Feb 13 21:23:28.037611 systemd[1]: Initializing machine ID from random generator. Feb 13 21:23:28.037617 systemd[1]: Queued start job for default target initrd.target. Feb 13 21:23:28.037623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:23:28.037629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:23:28.037636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 21:23:28.037642 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 21:23:28.037648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 21:23:28.037655 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 21:23:28.037661 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 21:23:28.037668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 21:23:28.037673 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 13 21:23:28.037680 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 13 21:23:28.037686 kernel: clocksource: Switched to clocksource tsc Feb 13 21:23:28.037692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 21:23:28.037698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:23:28.037704 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:23:28.037710 systemd[1]: Reached target slices.target - Slice Units. Feb 13 21:23:28.037716 systemd[1]: Reached target swap.target - Swaps. Feb 13 21:23:28.037722 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:23:28.037728 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:23:28.037735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:23:28.037741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 21:23:28.037747 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 21:23:28.037753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:23:28.037759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 21:23:28.037765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:23:28.037771 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:23:28.037777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 21:23:28.037784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 21:23:28.037790 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 21:23:28.037796 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 21:23:28.037802 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 21:23:28.037817 systemd-journald[266]: Collecting audit messages is disabled. Feb 13 21:23:28.037833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 21:23:28.037839 systemd-journald[266]: Journal started Feb 13 21:23:28.037853 systemd-journald[266]: Runtime Journal (/run/log/journal/3e237711bf0b4acc85618f353d3ac97a) is 8.0M, max 639.9M, 631.9M free. Feb 13 21:23:28.061192 systemd-modules-load[268]: Inserted module 'overlay' Feb 13 21:23:28.083115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:23:28.111714 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 21:23:28.183340 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 21:23:28.183356 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 21:23:28.183366 kernel: Bridge firewalling registered Feb 13 21:23:28.168287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:23:28.173301 systemd-modules-load[268]: Inserted module 'br_netfilter' Feb 13 21:23:28.195413 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 21:23:28.215449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 21:23:28.233464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:23:28.267441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:23:28.273214 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 21:23:28.305567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 21:23:28.312902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 21:23:28.316434 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:23:28.317404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 21:23:28.317763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:23:28.321765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:23:28.323352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 21:23:28.326033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:23:28.326518 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:23:28.356831 systemd-resolved[303]: Positive Trust Anchors: Feb 13 21:23:28.356840 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 21:23:28.412247 dracut-cmdline[307]: dracut-dracut-053 Feb 13 21:23:28.412247 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 21:23:28.356878 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 21:23:28.537184 kernel: SCSI subsystem initialized Feb 13 21:23:28.537199 kernel: Loading iSCSI transport class 
v2.0-870. Feb 13 21:23:28.359334 systemd-resolved[303]: Defaulting to hostname 'linux'. Feb 13 21:23:28.370373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 21:23:28.569195 kernel: iscsi: registered transport (tcp) Feb 13 21:23:28.381355 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 21:23:28.400436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:23:28.612228 kernel: iscsi: registered transport (qla4xxx) Feb 13 21:23:28.612243 kernel: QLogic iSCSI HBA Driver Feb 13 21:23:28.613338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 21:23:28.634410 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 21:23:28.690961 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 21:23:28.690979 kernel: device-mapper: uevent: version 1.0.3 Feb 13 21:23:28.722147 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 21:23:28.768158 kernel: raid6: avx2x4 gen() 53276 MB/s Feb 13 21:23:28.800160 kernel: raid6: avx2x2 gen() 55092 MB/s Feb 13 21:23:28.836536 kernel: raid6: avx2x1 gen() 46226 MB/s Feb 13 21:23:28.836554 kernel: raid6: using algorithm avx2x2 gen() 55092 MB/s Feb 13 21:23:28.883610 kernel: raid6: .... xor() 32015 MB/s, rmw enabled Feb 13 21:23:28.883631 kernel: raid6: using avx2x2 recovery algorithm Feb 13 21:23:28.925138 kernel: xor: automatically using best checksumming function avx Feb 13 21:23:29.040117 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 21:23:29.046045 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:23:29.062415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:23:29.072435 systemd-udevd[494]: Using default interface naming scheme 'v255'. 
Feb 13 21:23:29.085448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:23:29.111304 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 21:23:29.153664 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Feb 13 21:23:29.170386 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:23:29.198468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 21:23:29.256967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:23:29.289613 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 21:23:29.289629 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 21:23:29.300138 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 21:23:29.315281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 21:23:29.392235 kernel: ACPI: bus type USB registered Feb 13 21:23:29.392256 kernel: usbcore: registered new interface driver usbfs Feb 13 21:23:29.392264 kernel: usbcore: registered new interface driver hub Feb 13 21:23:29.392272 kernel: usbcore: registered new device driver usb Feb 13 21:23:29.392279 kernel: PTP clock support registered Feb 13 21:23:29.392286 kernel: libata version 3.00 loaded. Feb 13 21:23:29.392294 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 21:23:29.329304 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Feb 13 21:23:29.466206 kernel: AES CTR mode by8 optimization enabled Feb 13 21:23:29.466231 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 21:23:29.576077 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 21:23:29.576162 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 21:23:29.734423 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 21:23:29.734497 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 21:23:29.734563 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 21:23:29.734624 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 21:23:29.734686 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 21:23:29.734751 kernel: scsi host0: ahci Feb 13 21:23:29.734814 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 21:23:29.734875 kernel: scsi host1: ahci Feb 13 21:23:29.734933 kernel: hub 1-0:1.0: USB hub found Feb 13 21:23:29.735004 kernel: scsi host2: ahci Feb 13 21:23:29.735062 kernel: hub 1-0:1.0: 16 ports detected Feb 13 21:23:29.735136 kernel: scsi host3: ahci Feb 13 21:23:29.735201 kernel: hub 2-0:1.0: USB hub found Feb 13 21:23:29.735269 kernel: scsi host4: ahci Feb 13 21:23:29.735327 kernel: hub 2-0:1.0: 10 ports detected Feb 13 21:23:29.735391 kernel: scsi host5: ahci Feb 13 21:23:29.735449 kernel: scsi host6: ahci Feb 13 21:23:29.735507 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 13 21:23:29.735516 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 13 21:23:29.735523 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 13 21:23:29.735531 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 13 21:23:29.735538 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 
irq 127 Feb 13 21:23:29.735545 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 13 21:23:29.735553 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 13 21:23:29.444346 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:23:29.917941 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 21:23:29.917963 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 21:23:29.917972 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 21:23:30.380403 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 21:23:30.380417 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 21:23:30.380495 kernel: pps pps0: new PPS source ptp0 Feb 13 21:23:30.380562 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 21:23:30.380628 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 21:23:30.380690 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e5:56 Feb 13 21:23:30.380750 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 21:23:30.380810 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 21:23:30.380869 kernel: hub 1-14:1.0: USB hub found Feb 13 21:23:30.380952 kernel: hub 1-14:1.0: 4 ports detected Feb 13 21:23:30.381019 kernel: pps pps1: new PPS source ptp1 Feb 13 21:23:30.381076 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 21:23:30.381189 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 21:23:30.381197 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 21:23:30.381258 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 21:23:30.381267 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e5:57 Feb 13 21:23:30.381326 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 21:23:30.381336 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 21:23:30.381395 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 21:23:30.381403 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 21:23:30.381461 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 21:23:30.381469 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 21:23:30.381528 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 21:23:30.381536 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Feb 13 21:23:30.381594 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 21:23:30.381604 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 21:23:30.381611 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 21:23:30.381618 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 21:23:30.381626 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 21:23:30.381633 kernel: ata2.00: Features: NCQ-prio Feb 13 21:23:30.381640 kernel: ata1.00: Features: NCQ-prio Feb 13 21:23:30.381647 kernel: ata2.00: configured for UDMA/133 Feb 13 
21:23:30.381654 kernel: ata1.00: configured for UDMA/133 Feb 13 21:23:30.381661 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 21:23:30.381723 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 21:23:30.381740 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 21:23:30.381752 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 21:23:30.946654 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 21:23:31.135297 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 21:23:31.135384 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 21:23:31.135479 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 21:23:31.135496 kernel: usbcore: registered new interface driver usbhid Feb 13 21:23:31.135510 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 21:23:31.135617 kernel: usbhid: USB HID core driver Feb 13 21:23:31.135632 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 21:23:31.135645 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:23:31.135654 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 21:23:31.135734 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:23:31.135749 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 21:23:31.135852 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 21:23:31.135929 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 21:23:31.135994 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 21:23:31.136056 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 21:23:31.136123 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 21:23:31.136184 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on 
usb-0000:00:14.0-14.1/input0 Feb 13 21:23:31.136255 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 21:23:31.136266 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 21:23:31.136338 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 21:23:31.136401 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 21:23:31.136465 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 21:23:31.136530 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 21:23:31.136594 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:23:31.136604 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 21:23:31.136672 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 21:23:31.136735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 21:23:31.136744 kernel: GPT:9289727 != 937703087 Feb 13 21:23:31.136751 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 21:23:31.136758 kernel: GPT:9289727 != 937703087 Feb 13 21:23:31.136765 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 21:23:31.136772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:23:31.136780 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 21:23:31.136842 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 21:23:31.136906 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:23:31.136913 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 21:23:31.136976 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 21:23:31.137035 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 21:23:29.496481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:23:31.178227 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (563) Feb 13 21:23:29.725449 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 21:23:31.256350 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 21:23:31.256524 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (577) Feb 13 21:23:29.901315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 21:23:30.004359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 21:23:30.004388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:23:30.264440 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:23:30.282166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:23:30.282195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:23:31.406200 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:23:31.406215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:23:30.301193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 21:23:31.427219 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:23:30.340367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:23:31.447251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:23:30.356040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:23:31.466184 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:23:31.466195 disk-uuid[721]: Primary Header is updated. Feb 13 21:23:31.466195 disk-uuid[721]: Secondary Entries is updated. Feb 13 21:23:31.466195 disk-uuid[721]: Secondary Header is updated. Feb 13 21:23:31.508194 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:23:30.487660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:23:30.615267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:23:31.158410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:23:31.238487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 21:23:31.271908 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 21:23:31.292861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 21:23:31.310319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 21:23:31.314072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 21:23:31.343314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 21:23:32.470219 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:23:32.491086 disk-uuid[722]: The operation has completed successfully. 
Feb 13 21:23:32.500193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:23:32.525195 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 21:23:32.525272 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 21:23:32.558428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 21:23:32.597224 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 21:23:32.597289 sh[739]: Success Feb 13 21:23:32.631835 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 21:23:32.658573 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 21:23:32.660083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 21:23:32.730117 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 21:23:32.730156 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:23:32.759267 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 21:23:32.778722 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 21:23:32.796991 kernel: BTRFS info (device dm-0): using free space tree Feb 13 21:23:32.834105 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 21:23:32.835669 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 21:23:32.844582 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 21:23:32.856564 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 21:23:32.989704 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 21:23:32.989723 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:23:32.989730 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:23:32.989737 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:23:32.989745 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:23:32.989751 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 21:23:32.990226 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 21:23:33.001660 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 21:23:33.032516 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 21:23:33.043506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:23:33.080227 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 21:23:33.091173 systemd-networkd[922]: lo: Link UP Feb 13 21:23:33.091175 systemd-networkd[922]: lo: Gained carrier Feb 13 21:23:33.108273 ignition[896]: Ignition 2.19.0 Feb 13 21:23:33.093509 systemd-networkd[922]: Enumeration completed Feb 13 21:23:33.108277 ignition[896]: Stage: fetch-offline Feb 13 21:23:33.093561 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 21:23:33.108300 ignition[896]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:23:33.094214 systemd-networkd[922]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:23:33.108305 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:23:33.110244 systemd[1]: Reached target network.target - Network. 
Feb 13 21:23:33.108361 ignition[896]: parsed url from cmdline: "" Feb 13 21:23:33.110351 unknown[896]: fetched base config from "system" Feb 13 21:23:33.108363 ignition[896]: no config URL provided Feb 13 21:23:33.110355 unknown[896]: fetched user config from "system" Feb 13 21:23:33.108366 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 21:23:33.120975 systemd-networkd[922]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:23:33.108388 ignition[896]: parsing config with SHA512: b1549dea9d99ca6212b8264b9694d625c49735e8dd1a1365c5db1b82aabd09bb5f816f712ccaa0a43d23ea1461f5d4efe324f777d6a70efb97b7b19d5a18c5bf Feb 13 21:23:33.131441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:23:33.110567 ignition[896]: fetch-offline: fetch-offline passed Feb 13 21:23:33.149210 systemd-networkd[922]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:23:33.110569 ignition[896]: POST message to Packet Timeline Feb 13 21:23:33.155274 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 21:23:33.110572 ignition[896]: POST Status error: resource requires networking Feb 13 21:23:33.170500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 21:23:33.110606 ignition[896]: Ignition finished successfully Feb 13 21:23:33.358308 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 21:23:33.352527 systemd-networkd[922]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 21:23:33.208631 ignition[934]: Ignition 2.19.0 Feb 13 21:23:33.208641 ignition[934]: Stage: kargs Feb 13 21:23:33.208903 ignition[934]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:23:33.208920 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:23:33.210409 ignition[934]: kargs: kargs passed Feb 13 21:23:33.210415 ignition[934]: POST message to Packet Timeline Feb 13 21:23:33.210436 ignition[934]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:23:33.211482 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36741->[::1]:53: read: connection refused Feb 13 21:23:33.411563 ignition[934]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 21:23:33.412568 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36518->[::1]:53: read: connection refused Feb 13 21:23:33.593195 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 21:23:33.594084 systemd-networkd[922]: eno1: Link UP Feb 13 21:23:33.594260 systemd-networkd[922]: eno2: Link UP Feb 13 21:23:33.594387 systemd-networkd[922]: enp1s0f0np0: Link UP Feb 13 21:23:33.594540 systemd-networkd[922]: enp1s0f0np0: Gained carrier Feb 13 21:23:33.603242 systemd-networkd[922]: enp1s0f1np1: Link UP Feb 13 21:23:33.638266 systemd-networkd[922]: enp1s0f0np0: DHCPv4 address 147.28.180.221/31, gateway 147.28.180.220 acquired from 145.40.83.140 Feb 13 21:23:33.812948 ignition[934]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 21:23:33.814038 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55429->[::1]:53: read: connection refused Feb 13 21:23:34.369864 systemd-networkd[922]: enp1s0f1np1: Gained carrier Feb 13 21:23:34.614560 ignition[934]: GET https://metadata.packet.net/metadata: attempt #4 Feb 
13 21:23:34.615743 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39983->[::1]:53: read: connection refused Feb 13 21:23:34.945723 systemd-networkd[922]: enp1s0f0np0: Gained IPv6LL Feb 13 21:23:36.097702 systemd-networkd[922]: enp1s0f1np1: Gained IPv6LL Feb 13 21:23:36.217211 ignition[934]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 21:23:36.218642 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37711->[::1]:53: read: connection refused Feb 13 21:23:39.421834 ignition[934]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 21:23:40.102470 ignition[934]: GET result: OK Feb 13 21:23:41.040287 ignition[934]: Ignition finished successfully Feb 13 21:23:41.045436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 21:23:41.069379 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 21:23:41.075799 ignition[952]: Ignition 2.19.0 Feb 13 21:23:41.075803 ignition[952]: Stage: disks Feb 13 21:23:41.075902 ignition[952]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:23:41.075909 ignition[952]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:23:41.076441 ignition[952]: disks: disks passed Feb 13 21:23:41.076444 ignition[952]: POST message to Packet Timeline Feb 13 21:23:41.076453 ignition[952]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:23:41.618023 ignition[952]: GET result: OK Feb 13 21:23:41.992645 ignition[952]: Ignition finished successfully Feb 13 21:23:41.995185 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 21:23:42.011454 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 21:23:42.030388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Feb 13 21:23:42.051374 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 21:23:42.072404 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:23:42.092402 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:23:42.120336 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 21:23:42.153518 systemd-fsck[969]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 21:23:42.165149 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 21:23:42.177403 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 21:23:42.293982 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 21:23:42.308356 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 21:23:42.294321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 21:23:42.331331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 21:23:42.340061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 21:23:42.458394 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (978) Feb 13 21:23:42.458407 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 21:23:42.458439 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:23:42.458456 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:23:42.458470 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:23:42.458485 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:23:42.378772 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 21:23:42.482474 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... 
Feb 13 21:23:42.495153 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 21:23:42.536270 coreos-metadata[980]: Feb 13 21:23:42.530 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:23:42.495174 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:23:42.577306 coreos-metadata[996]: Feb 13 21:23:42.530 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:23:42.519084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:23:42.545375 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 21:23:42.575351 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 21:23:42.627200 initrd-setup-root[1011]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 21:23:42.637236 initrd-setup-root[1018]: cut: /sysroot/etc/group: No such file or directory Feb 13 21:23:42.647208 initrd-setup-root[1025]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 21:23:42.657204 initrd-setup-root[1032]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 21:23:42.660946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 21:23:42.685316 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 21:23:42.687377 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 21:23:42.733375 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 21:23:42.724865 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 21:23:42.747304 ignition[1103]: INFO : Ignition 2.19.0 Feb 13 21:23:42.747304 ignition[1103]: INFO : Stage: mount Feb 13 21:23:42.754276 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:23:42.754276 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:23:42.754276 ignition[1103]: INFO : mount: mount passed Feb 13 21:23:42.754276 ignition[1103]: INFO : POST message to Packet Timeline Feb 13 21:23:42.754276 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:23:42.749866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 21:23:43.134057 coreos-metadata[980]: Feb 13 21:23:43.133 INFO Fetch successful Feb 13 21:23:43.166214 coreos-metadata[980]: Feb 13 21:23:43.166 INFO wrote hostname ci-4081.3.1-a-e8b80a8c0e to /sysroot/etc/hostname Feb 13 21:23:43.167390 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 21:23:43.211760 coreos-metadata[996]: Feb 13 21:23:43.211 INFO Fetch successful Feb 13 21:23:43.282785 ignition[1103]: INFO : GET result: OK Feb 13 21:23:43.285425 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 21:23:43.285498 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 21:23:43.705306 ignition[1103]: INFO : Ignition finished successfully Feb 13 21:23:43.708456 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 21:23:43.744363 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 21:23:43.755858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 21:23:43.817162 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1130) Feb 13 21:23:43.845985 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 21:23:43.846002 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:23:43.863122 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:23:43.900508 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:23:43.900530 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:23:43.913568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:23:43.941490 ignition[1147]: INFO : Ignition 2.19.0 Feb 13 21:23:43.941490 ignition[1147]: INFO : Stage: files Feb 13 21:23:43.957405 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:23:43.957405 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:23:43.957405 ignition[1147]: DEBUG : files: compiled without relabeling support, skipping Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 21:23:43.957405 ignition[1147]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 21:23:43.957405 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 21:23:43.957405 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 
21:23:43.945330 unknown[1147]: wrote ssh authorized keys file for user: core Feb 13 21:23:44.091317 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 21:23:44.150546 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 21:23:44.150546 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 21:23:44.782998 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 21:23:45.647413 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:23:45.647413 ignition[1147]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" 
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:23:45.677426 ignition[1147]: INFO : files: files passed Feb 13 21:23:45.677426 ignition[1147]: INFO : POST message to Packet Timeline Feb 13 21:23:45.677426 ignition[1147]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:23:46.614080 ignition[1147]: INFO : GET result: OK Feb 13 21:23:47.000908 ignition[1147]: INFO : Ignition finished successfully Feb 13 21:23:47.003951 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 21:23:47.040332 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 21:23:47.050849 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 21:23:47.060666 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 21:23:47.060730 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 21:23:47.119513 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:23:47.119513 initrd-setup-root-after-ignition[1188]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:23:47.158324 initrd-setup-root-after-ignition[1192]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:23:47.124490 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:23:47.135552 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 21:23:47.184335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 21:23:47.253836 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 21:23:47.254229 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 21:23:47.274383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 21:23:47.294415 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 21:23:47.315182 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 21:23:47.327246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 21:23:47.392826 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:23:47.420479 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 21:23:47.448019 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:23:47.459762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:23:47.480834 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 21:23:47.499776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 21:23:47.500220 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:23:47.526954 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 21:23:47.548787 systemd[1]: Stopped target basic.target - Basic System. Feb 13 21:23:47.567773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 21:23:47.585786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:23:47.607772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 21:23:47.629794 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 21:23:47.649791 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:23:47.670820 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 21:23:47.691804 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Feb 13 21:23:47.711786 systemd[1]: Stopped target swap.target - Swaps. Feb 13 21:23:47.729668 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 21:23:47.730075 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:23:47.754888 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:23:47.774805 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:23:47.796661 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 21:23:47.797085 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:23:47.818621 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 21:23:47.819017 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 21:23:47.849779 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 21:23:47.850279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:23:47.870993 systemd[1]: Stopped target paths.target - Path Units. Feb 13 21:23:47.888628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 21:23:47.889060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:23:47.909782 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 21:23:47.928760 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 21:23:47.947750 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 21:23:47.948057 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:23:47.967795 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 21:23:47.968122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:23:47.990890 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 13 21:23:47.991319 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:23:48.010851 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 21:23:48.011256 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 21:23:48.146331 ignition[1213]: INFO : Ignition 2.19.0 Feb 13 21:23:48.146331 ignition[1213]: INFO : Stage: umount Feb 13 21:23:48.146331 ignition[1213]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:23:48.146331 ignition[1213]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:23:48.146331 ignition[1213]: INFO : umount: umount passed Feb 13 21:23:48.146331 ignition[1213]: INFO : POST message to Packet Timeline Feb 13 21:23:48.146331 ignition[1213]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:23:48.028864 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 21:23:48.029289 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 21:23:48.058309 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 21:23:48.064770 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 21:23:48.080384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 21:23:48.080532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:23:48.118555 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 21:23:48.118632 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:23:48.149037 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 21:23:48.149753 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 21:23:48.149840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 21:23:48.154939 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 21:23:48.155031 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 21:23:48.765415 ignition[1213]: INFO : GET result: OK Feb 13 21:23:49.151205 ignition[1213]: INFO : Ignition finished successfully Feb 13 21:23:49.154093 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 21:23:49.154534 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 21:23:49.171536 systemd[1]: Stopped target network.target - Network. Feb 13 21:23:49.187432 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 21:23:49.187610 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 21:23:49.206530 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 21:23:49.206668 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 21:23:49.224585 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 21:23:49.224744 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 21:23:49.242587 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 21:23:49.242758 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 21:23:49.260581 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 21:23:49.260756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 21:23:49.269193 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 21:23:49.278219 systemd-networkd[922]: enp1s0f0np0: DHCPv6 lease lost Feb 13 21:23:49.285332 systemd-networkd[922]: enp1s0f1np1: DHCPv6 lease lost Feb 13 21:23:49.296695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 21:23:49.315202 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 21:23:49.315482 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 21:23:49.334460 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 13 21:23:49.334818 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 21:23:49.354775 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 21:23:49.354890 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:23:49.396242 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 21:23:49.412309 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 21:23:49.412559 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:23:49.432597 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 21:23:49.432767 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:23:49.450596 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 21:23:49.450755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 21:23:49.470574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 21:23:49.470739 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:23:49.478961 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:23:49.511494 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 21:23:49.511967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:23:49.544644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 21:23:49.544682 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 21:23:49.565283 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 21:23:49.565310 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:23:49.587440 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 21:23:49.587526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:23:49.626310 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 21:23:49.626602 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 21:23:49.666300 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 21:23:49.666576 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:23:49.712568 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 21:23:49.718562 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 21:23:49.718714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:23:49.739405 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 21:23:49.739431 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:23:49.757327 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 21:23:49.997329 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Feb 13 21:23:49.757352 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:23:49.787489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:23:49.787627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:23:49.810622 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 21:23:49.810966 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 21:23:49.831241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 21:23:49.831487 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 21:23:49.851341 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Feb 13 21:23:49.887526 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 21:23:49.936277 systemd[1]: Switching root. Feb 13 21:23:50.093190 systemd-journald[266]: Journal stopped Feb 13 21:23:28.024696 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 21:23:28.024701 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 13 21:23:28.024706 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 21:23:28.024711 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 13 21:23:28.024717 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 13 21:23:28.024722 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 13 21:23:28.024727 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 21:23:28.024732 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 21:23:28.024737 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI.
00000000) Feb 13 21:23:28.024742 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 21:23:28.024748 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 21:23:28.024754 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 21:23:28.024759 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:23:28.024764 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 21:23:28.024769 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 21:23:28.024774 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:23:28.024779 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:23:28.024784 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 21:23:28.024789 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 21:23:28.024794 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:23:28.024800 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:23:28.024805 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 21:23:28.024810 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 13 21:23:28.024815 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 21:23:28.024820 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 21:23:28.024825 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 21:23:28.024830 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 13 
21:23:28.024835 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 21:23:28.024840 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 21:23:28.024845 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 21:23:28.024850 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 13 21:23:28.024855 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 21:23:28.024860 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 13 21:23:28.024865 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 13 21:23:28.024871 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 13 21:23:28.024875 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 13 21:23:28.024880 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 13 21:23:28.024886 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 13 21:23:28.024891 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 13 21:23:28.024896 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 13 21:23:28.024901 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 13 21:23:28.024906 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 13 21:23:28.024911 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 13 21:23:28.024916 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 13 21:23:28.024921 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 13 21:23:28.024926 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 13 21:23:28.024932 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 13 
21:23:28.024937 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 13 21:23:28.024942 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 13 21:23:28.024947 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 13 21:23:28.024952 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 13 21:23:28.024956 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 13 21:23:28.024961 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 13 21:23:28.024966 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 13 21:23:28.024971 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 13 21:23:28.024977 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 13 21:23:28.024982 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 13 21:23:28.024987 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 13 21:23:28.024992 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 13 21:23:28.024997 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 13 21:23:28.025002 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 13 21:23:28.025007 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Feb 13 21:23:28.025012 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 13 21:23:28.025017 kernel: No NUMA configuration found Feb 13 21:23:28.025022 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 13 21:23:28.025028 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 13 21:23:28.025033 kernel: Zone ranges: Feb 13 21:23:28.025038 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 21:23:28.025043 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 21:23:28.025048 kernel: 
Normal [mem 0x0000000100000000-0x000000086effffff] Feb 13 21:23:28.025053 kernel: Movable zone start for each node Feb 13 21:23:28.025058 kernel: Early memory node ranges Feb 13 21:23:28.025063 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 21:23:28.025068 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 21:23:28.025073 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff] Feb 13 21:23:28.025079 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff] Feb 13 21:23:28.025084 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 13 21:23:28.025089 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 13 21:23:28.025098 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 13 21:23:28.025106 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 13 21:23:28.025111 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 21:23:28.025117 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 21:23:28.025123 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 21:23:28.025129 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 21:23:28.025134 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 13 21:23:28.025139 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 13 21:23:28.025145 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 13 21:23:28.025150 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 13 21:23:28.025155 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 21:23:28.025161 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 21:23:28.025166 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 21:23:28.025173 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 21:23:28.025178 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 21:23:28.025183 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high 
edge lint[0x1]) Feb 13 21:23:28.025188 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 21:23:28.025194 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 21:23:28.025199 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 21:23:28.025205 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 21:23:28.025210 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 21:23:28.025215 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 21:23:28.025220 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 21:23:28.025227 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 21:23:28.025232 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 21:23:28.025237 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 21:23:28.025242 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 21:23:28.025248 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 21:23:28.025253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 21:23:28.025258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 21:23:28.025264 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 21:23:28.025269 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 21:23:28.025276 kernel: TSC deadline timer available Feb 13 21:23:28.025281 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 21:23:28.025286 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 13 21:23:28.025292 kernel: Booting paravirtualized kernel on bare hardware Feb 13 21:23:28.025297 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 21:23:28.025303 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 21:23:28.025308 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 
21:23:28.025314 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 21:23:28.025319 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 21:23:28.025326 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 21:23:28.025332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 21:23:28.025337 kernel: random: crng init done
Feb 13 21:23:28.025342 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 21:23:28.025348 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 21:23:28.025353 kernel: Fallback order for Node 0: 0
Feb 13 21:23:28.025358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 21:23:28.025364 kernel: Policy zone: Normal
Feb 13 21:23:28.025370 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 21:23:28.025375 kernel: software IO TLB: area num 16.
Feb 13 21:23:28.025381 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 732420K reserved, 0K cma-reserved)
Feb 13 21:23:28.025387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 21:23:28.025392 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 21:23:28.025398 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 21:23:28.025403 kernel: Dynamic Preempt: voluntary
Feb 13 21:23:28.025408 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 21:23:28.025414 kernel: rcu: RCU event tracing is enabled.
Feb 13 21:23:28.025420 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 21:23:28.025426 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 21:23:28.025431 kernel: Rude variant of Tasks RCU enabled.
Feb 13 21:23:28.025437 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 21:23:28.025442 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 21:23:28.025447 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 21:23:28.025453 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 21:23:28.025458 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 21:23:28.025464 kernel: Console: colour dummy device 80x25
Feb 13 21:23:28.025470 kernel: printk: console [tty0] enabled
Feb 13 21:23:28.025475 kernel: printk: console [ttyS1] enabled
Feb 13 21:23:28.025480 kernel: ACPI: Core revision 20230628
Feb 13 21:23:28.025486 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 21:23:28.025491 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 21:23:28.025497 kernel: DMAR: Host address width 39
Feb 13 21:23:28.025502 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 21:23:28.025507 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 21:23:28.025513 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 21:23:28.025519 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 21:23:28.025524 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 21:23:28.025530 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 21:23:28.025535 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 21:23:28.025540 kernel: x2apic enabled
Feb 13 21:23:28.025546 kernel: APIC: Switched APIC routing to: cluster x2apic
Feb 13 21:23:28.025551 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 21:23:28.025557 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 21:23:28.025562 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 21:23:28.025569 kernel: process: using mwait in idle threads
Feb 13 21:23:28.025574 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 21:23:28.025579 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 21:23:28.025585 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 21:23:28.025590 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 21:23:28.025595 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 21:23:28.025600 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 21:23:28.025606 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 21:23:28.025611 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 21:23:28.025616 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 21:23:28.025622 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 21:23:28.025628 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 21:23:28.025633 kernel: TAA: Mitigation: TSX disabled
Feb 13 21:23:28.025639 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 21:23:28.025644 kernel: SRBDS: Mitigation: Microcode
Feb 13 21:23:28.025650 kernel: GDS: Mitigation: Microcode
Feb 13 21:23:28.025655 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 21:23:28.025660 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 21:23:28.025666 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 21:23:28.025671 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 21:23:28.025676 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 21:23:28.025682 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 21:23:28.025688 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 21:23:28.025693 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 21:23:28.025699 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 21:23:28.025704 kernel: Freeing SMP alternatives memory: 32K
Feb 13 21:23:28.025709 kernel: pid_max: default: 32768 minimum: 301
Feb 13 21:23:28.025714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 21:23:28.025720 kernel: landlock: Up and running.
Feb 13 21:23:28.025725 kernel: SELinux: Initializing.
Feb 13 21:23:28.025730 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 21:23:28.025736 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 21:23:28.025741 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 21:23:28.025747 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:28.025753 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:28.025759 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:28.025764 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 21:23:28.025769 kernel: ... version: 4
Feb 13 21:23:28.025775 kernel: ... bit width: 48
Feb 13 21:23:28.025780 kernel: ... generic registers: 4
Feb 13 21:23:28.025786 kernel: ... value mask: 0000ffffffffffff
Feb 13 21:23:28.025791 kernel: ... max period: 00007fffffffffff
Feb 13 21:23:28.025797 kernel: ... fixed-purpose events: 3
Feb 13 21:23:28.025803 kernel: ... event mask: 000000070000000f
Feb 13 21:23:28.025808 kernel: signal: max sigframe size: 2032
Feb 13 21:23:28.025814 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 21:23:28.025819 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 21:23:28.025824 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 21:23:28.025830 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 21:23:28.025835 kernel: smp: Bringing up secondary CPUs ...
Feb 13 21:23:28.025840 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 21:23:28.025847 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Feb 13 21:23:28.025852 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 21:23:28.025858 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 21:23:28.025863 kernel: smpboot: Max logical packages: 1
Feb 13 21:23:28.025869 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 21:23:28.025874 kernel: devtmpfs: initialized
Feb 13 21:23:28.025879 kernel: x86/mm: Memory block size: 128MB
Feb 13 21:23:28.025885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes)
Feb 13 21:23:28.025890 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 21:23:28.025896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 21:23:28.025902 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 21:23:28.025907 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 21:23:28.025913 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 21:23:28.025918 kernel: audit: initializing netlink subsys (disabled)
Feb 13 21:23:28.025923 kernel: audit: type=2000 audit(1739481802.039:1): state=initialized audit_enabled=0 res=1
Feb 13 21:23:28.025928 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 21:23:28.025934 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 21:23:28.025939 kernel: cpuidle: using governor menu
Feb 13 21:23:28.025945 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 21:23:28.025951 kernel: dca service started, version 1.12.1
Feb 13 21:23:28.025956 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 21:23:28.025961 kernel: PCI: Using configuration type 1 for base access
Feb 13 21:23:28.025967 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 21:23:28.025972 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 21:23:28.025978 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 21:23:28.025983 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 21:23:28.025989 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 21:23:28.025995 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 21:23:28.026000 kernel: ACPI: Added _OSI(Module Device)
Feb 13 21:23:28.026005 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 21:23:28.026010 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 21:23:28.026016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 21:23:28.026021 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 21:23:28.026026 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026032 kernel: ACPI: SSDT 0xFFFF8FA900FAF400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 21:23:28.026037 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026043 kernel: ACPI: SSDT 0xFFFF8FA900F9C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 21:23:28.026049 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026054 kernel: ACPI: SSDT 0xFFFF8FA900F86A00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 21:23:28.026060 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026065 kernel: ACPI: SSDT 0xFFFF8FA900F9F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 21:23:28.026070 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026075 kernel: ACPI: SSDT 0xFFFF8FA900FA1000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 21:23:28.026081 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 21:23:28.026086 kernel: ACPI: SSDT 0xFFFF8FA900FADC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 21:23:28.026092 kernel: ACPI: _OSC evaluated successfully for all CPUs
Feb 13 21:23:28.026098 kernel: ACPI: Interpreter enabled
Feb 13 21:23:28.026122 kernel: ACPI: PM: (supports S0 S5)
Feb 13 21:23:28.026128 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 21:23:28.026147 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 21:23:28.026152 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 21:23:28.026157 kernel: HEST: Table parsing has been initialized.
Feb 13 21:23:28.026163 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 21:23:28.026168 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 21:23:28.026174 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 21:23:28.026180 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 21:23:28.026185 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Feb 13 21:23:28.026191 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Feb 13 21:23:28.026196 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Feb 13 21:23:28.026201 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Feb 13 21:23:28.026207 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Feb 13 21:23:28.026212 kernel: ACPI: \_TZ_.FN00: New power resource
Feb 13 21:23:28.026218 kernel: ACPI: \_TZ_.FN01: New power resource
Feb 13 21:23:28.026224 kernel: ACPI: \_TZ_.FN02: New power resource
Feb 13 21:23:28.026229 kernel: ACPI: \_TZ_.FN03: New power resource
Feb 13 21:23:28.026234 kernel: ACPI: \_TZ_.FN04: New power resource
Feb 13 21:23:28.026240 kernel: ACPI: \PIN_: New power resource
Feb 13 21:23:28.026245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 21:23:28.026319 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 21:23:28.026370 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 21:23:28.026417 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 21:23:28.026426 kernel: PCI host bridge to bus 0000:00
Feb 13 21:23:28.026473 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 21:23:28.026515 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 21:23:28.026556 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 21:23:28.026597 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 21:23:28.026637 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 21:23:28.026678 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 21:23:28.026735 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 21:23:28.026791 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 21:23:28.026839 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.026890 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 21:23:28.026936 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 21:23:28.026986 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 21:23:28.027034 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 21:23:28.027086 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 21:23:28.027135 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 21:23:28.027183 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 21:23:28.027233 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 21:23:28.027280 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 21:23:28.027329 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 21:23:28.027379 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 21:23:28.027426 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 21:23:28.027478 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 21:23:28.027525 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 21:23:28.027575 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 21:23:28.027624 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 21:23:28.027672 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 21:23:28.027728 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 21:23:28.027775 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 21:23:28.027822 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 21:23:28.027871 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 21:23:28.027917 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 21:23:28.027966 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 21:23:28.028015 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 21:23:28.028064 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 21:23:28.028112 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 21:23:28.028197 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 21:23:28.028242 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 21:23:28.028289 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 21:23:28.028337 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 21:23:28.028383 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 21:23:28.028434 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 21:23:28.028483 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028538 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 21:23:28.028585 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028634 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 21:23:28.028682 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028732 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 21:23:28.028781 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028832 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 21:23:28.028879 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.028929 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 21:23:28.028975 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 21:23:28.029025 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 21:23:28.029076 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 21:23:28.029161 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 21:23:28.029209 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 21:23:28.029262 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 21:23:28.029308 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 21:23:28.029360 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 21:23:28.029407 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 21:23:28.029458 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 21:23:28.029506 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 21:23:28.029553 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 21:23:28.029601 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 21:23:28.029653 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 21:23:28.029702 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 21:23:28.029750 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 21:23:28.029798 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 21:23:28.029846 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 21:23:28.029893 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 21:23:28.029941 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 21:23:28.029987 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 21:23:28.030034 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 21:23:28.030080 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 21:23:28.030137 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Feb 13 21:23:28.030189 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 13 21:23:28.030236 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 13 21:23:28.030284 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 21:23:28.030330 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 13 21:23:28.030378 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.030425 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 21:23:28.030473 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 21:23:28.030520 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 21:23:28.030573 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Feb 13 21:23:28.030622 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 21:23:28.030669 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 13 21:23:28.030718 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 21:23:28.030765 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 13 21:23:28.030813 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 21:23:28.030863 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 21:23:28.030910 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 21:23:28.030957 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 13 21:23:28.031004 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 13 21:23:28.031057 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Feb 13 21:23:28.031108 kernel: pci 0000:06:00.0: enabling Extended Tags
Feb 13 21:23:28.031158 kernel: pci 0000:06:00.0: supports D1 D2
Feb 13 21:23:28.031206 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 21:23:28.031257 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 13 21:23:28.031303 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 13 21:23:28.031351 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 13 21:23:28.031407 kernel: pci_bus 0000:07: extended config space not accessible
Feb 13 21:23:28.031461 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Feb 13 21:23:28.031512 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Feb 13 21:23:28.031562 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Feb 13 21:23:28.031614 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 13 21:23:28.031663 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 21:23:28.031712 kernel: pci 0000:07:00.0: supports D1 D2
Feb 13 21:23:28.031763 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 21:23:28.031810 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 13 21:23:28.031859 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 21:23:28.031906 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 13 21:23:28.031915 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 13 21:23:28.031921 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 13 21:23:28.031927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 13 21:23:28.031933 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 13 21:23:28.031939 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 13 21:23:28.031944 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 13 21:23:28.031950 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 13 21:23:28.031956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 13 21:23:28.031961 kernel: iommu: Default domain type: Translated
Feb 13 21:23:28.031968 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 21:23:28.031974 kernel: PCI: Using ACPI for IRQ routing
Feb 13 21:23:28.031979 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 21:23:28.031985 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 13 21:23:28.031991 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff]
Feb 13 21:23:28.031996 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Feb 13 21:23:28.032002 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Feb 13 21:23:28.032007 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Feb 13 21:23:28.032013 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Feb 13 21:23:28.032063 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Feb 13 21:23:28.032132 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Feb 13 21:23:28.032197 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 21:23:28.032205 kernel: vgaarb: loaded
Feb 13 21:23:28.032211 kernel: clocksource: Switched to clocksource tsc-early
Feb 13 21:23:28.032217 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 21:23:28.032222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 21:23:28.032228 kernel: pnp: PnP ACPI init
Feb 13 21:23:28.032276 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 13 21:23:28.032324 kernel: pnp 00:02: [dma 0 disabled]
Feb 13 21:23:28.032370 kernel: pnp 00:03: [dma 0 disabled]
Feb 13 21:23:28.032418 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 13 21:23:28.032462 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 13 21:23:28.032507 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 13 21:23:28.032553 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 13 21:23:28.032597 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 13 21:23:28.032640 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 13 21:23:28.032681 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 13 21:23:28.032728 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 13 21:23:28.032771 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 13 21:23:28.032814 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 13 21:23:28.032856 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 13 21:23:28.032905 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 13 21:23:28.032947 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 13 21:23:28.032990 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 13 21:23:28.033032 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 13 21:23:28.033073 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 13 21:23:28.033135 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 13 21:23:28.033193 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 13 21:23:28.033239 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 13 21:23:28.033247 kernel: pnp: PnP ACPI: found 10 devices
Feb 13 21:23:28.033253 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 21:23:28.033259 kernel: NET: Registered PF_INET protocol family
Feb 13 21:23:28.033265 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 21:23:28.033271 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 13 21:23:28.033277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 21:23:28.033283 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 21:23:28.033290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 21:23:28.033296 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 13 21:23:28.033302 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 21:23:28.033308 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 21:23:28.033313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 21:23:28.033319 kernel: NET: Registered PF_XDP protocol family
Feb 13 21:23:28.033366 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Feb 13 21:23:28.033413 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Feb 13 21:23:28.033462 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Feb 13 21:23:28.033511 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 21:23:28.033558 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 21:23:28.033607 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 21:23:28.033654 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 21:23:28.033700 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 21:23:28.033747 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 21:23:28.033793 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 21:23:28.033842 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 21:23:28.033888 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 21:23:28.033935 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 21:23:28.033981 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 21:23:28.034028 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 21:23:28.034076 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 21:23:28.034159 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 13 21:23:28.034205 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 13 21:23:28.034253 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 13 21:23:28.034300 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 21:23:28.034348 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 13 21:23:28.034395 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 13 21:23:28.034440 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 13 21:23:28.034487 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 13 21:23:28.034532 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 13 21:23:28.034574 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 21:23:28.034615 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 21:23:28.034656 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 21:23:28.034697 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Feb 13 21:23:28.034738 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 13 21:23:28.034783 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Feb 13 21:23:28.034829 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 21:23:28.034879 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Feb 13 21:23:28.034922 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Feb 13 21:23:28.034969 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 21:23:28.035013 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Feb 13 21:23:28.035059 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Feb 13 21:23:28.035107 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 21:23:28.035188 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 13 21:23:28.035234 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 21:23:28.035242 kernel: PCI: CLS 64 bytes, default 64
Feb 13 21:23:28.035248 kernel: DMAR: No ATSR found
Feb 13 21:23:28.035253 kernel: DMAR: No SATC found
Feb 13 21:23:28.035259 kernel: DMAR: dmar0: Using Queued invalidation
Feb 13 21:23:28.035306 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 13 21:23:28.035355 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 13 21:23:28.035402 kernel: pci 0000:00:08.0: Adding to iommu group 2
Feb 13 21:23:28.035449 kernel: pci 0000:00:12.0: Adding to iommu group 3
Feb 13 21:23:28.035495 kernel: pci 0000:00:14.0: Adding to iommu group 4
Feb 13 21:23:28.035541 kernel: pci 0000:00:14.2: Adding to iommu group 4
Feb 13 21:23:28.035588 kernel: pci 0000:00:15.0: Adding to iommu group 5
Feb 13 21:23:28.035633 kernel: pci 0000:00:15.1: Adding to iommu group 5
Feb 13 21:23:28.035680 kernel: pci 0000:00:16.0: Adding to iommu group 6
Feb 13 21:23:28.035726 kernel: pci 0000:00:16.1: Adding to iommu group 6
Feb 13 21:23:28.035775 kernel: pci 0000:00:16.4: Adding to iommu group 6
Feb 13 21:23:28.035820 kernel: pci 0000:00:17.0: Adding to iommu group 7
Feb 13 21:23:28.035867 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Feb 13 21:23:28.035913 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Feb 13 21:23:28.035960 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Feb 13 21:23:28.036007 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Feb 13 21:23:28.036053 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Feb 13 21:23:28.036101 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Feb 13 21:23:28.036182 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Feb 13 21:23:28.036229 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Feb 13 21:23:28.036274 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Feb 13 21:23:28.036323 kernel: pci 0000:01:00.0: Adding to iommu group 1
Feb 13 21:23:28.036370 kernel: pci 0000:01:00.1: Adding to iommu group 1
Feb 13 21:23:28.036418 kernel: pci 0000:03:00.0: Adding to iommu group 15
Feb 13 21:23:28.036465 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 13 21:23:28.036513 kernel: pci 0000:06:00.0: Adding to iommu group 17
Feb 13 21:23:28.036564 kernel: pci 0000:07:00.0: Adding to iommu group 17
Feb 13 21:23:28.036572 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 13 21:23:28.036578 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 21:23:28.036585 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Feb 13 21:23:28.036590 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Feb 13 21:23:28.036596 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 13 21:23:28.036602 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 13 21:23:28.036608 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 13 21:23:28.036656 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 13 21:23:28.036666 kernel: Initialise system trusted keyrings
Feb 13 21:23:28.036671 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 13 21:23:28.036677 kernel: Key type asymmetric registered
Feb 13 21:23:28.036683 kernel: Asymmetric key parser 'x509' registered
Feb 13 21:23:28.036688 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 21:23:28.036694 kernel: io scheduler mq-deadline registered
Feb 13 21:23:28.036700 kernel: io scheduler kyber registered
Feb 13 21:23:28.036705 kernel: io scheduler bfq registered
Feb 13 21:23:28.036753 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Feb 13 21:23:28.036799 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Feb 13 21:23:28.036846 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Feb 13 21:23:28.036891 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Feb 13 21:23:28.036938 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Feb 13 21:23:28.036984 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Feb 13 21:23:28.037037 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 13 21:23:28.037047 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 13 21:23:28.037053 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 13 21:23:28.037059 kernel: pstore: Using crash dump compression: deflate
Feb 13 21:23:28.037065 kernel: pstore: Registered erst as persistent store backend
Feb 13 21:23:28.037070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 21:23:28.037076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 21:23:28.037082 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 21:23:28.037088 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 21:23:28.037093 kernel: hpet_acpi_add: no address or irqs in _CRS
Feb 13 21:23:28.037184 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 13 21:23:28.037193 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 21:23:28.037236 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 13 21:23:28.037279 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 13 21:23:28.037322 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T21:23:26 UTC (1739481806)
Feb 13 21:23:28.037366 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 13 21:23:28.037374 kernel: intel_pstate: Intel P-state driver initializing
Feb 13 21:23:28.037380 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 13 21:23:28.037387 kernel: intel_pstate: HWP enabled
Feb 13 21:23:28.037393 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 13 21:23:28.037399 kernel: vesafb: scrolling: redraw
Feb 13 21:23:28.037404 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 13 21:23:28.037410 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000031ebe921, using 768k, total 768k
Feb 13 21:23:28.037416 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 21:23:28.037422 kernel: fb0: VESA VGA frame buffer device
Feb 13 21:23:28.037427 kernel: NET: Registered PF_INET6 protocol family
Feb 13 21:23:28.037433 kernel: Segment Routing with IPv6
Feb 13 21:23:28.037440 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 21:23:28.037445 kernel: NET: Registered PF_PACKET protocol family
Feb 13 21:23:28.037451 kernel: Key type dns_resolver registered
Feb 13 21:23:28.037457 kernel: microcode: Microcode Update Driver: v2.2.
Feb 13 21:23:28.037462 kernel: IPI shorthand broadcast: enabled
Feb 13 21:23:28.037468 kernel: sched_clock: Marking stable (2476125858, 1385628491)->(4406112287, -544357938)
Feb 13 21:23:28.037474 kernel: registered taskstats version 1
Feb 13 21:23:28.037480 kernel: Loading compiled-in X.509 certificates
Feb 13 21:23:28.037485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 21:23:28.037492 kernel: Key type .fscrypt registered
Feb 13 21:23:28.037497 kernel: Key type fscrypt-provisioning registered
Feb 13 21:23:28.037503 kernel: ima: Allocated hash algorithm: sha1
Feb 13 21:23:28.037509 kernel: ima: No architecture policies found
Feb 13 21:23:28.037514 kernel: clk: Disabling unused clocks
Feb 13 21:23:28.037520 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 21:23:28.037526 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 21:23:28.037531 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 21:23:28.037538 kernel: Run /init as init process
Feb 13 21:23:28.037544 kernel: with arguments:
Feb 13 21:23:28.037549 kernel: /init
Feb 13 21:23:28.037555 kernel: with environment:
Feb 13 21:23:28.037560 kernel: HOME=/
Feb 13 21:23:28.037566 kernel: TERM=linux
Feb 13 21:23:28.037572 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 21:23:28.037579 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 21:23:28.037587 systemd[1]: Detected architecture x86-64.
Feb 13 21:23:28.037593 systemd[1]: Running in initrd.
Feb 13 21:23:28.037599 systemd[1]: No hostname configured, using default hostname.
Feb 13 21:23:28.037605 systemd[1]: Hostname set to .
Feb 13 21:23:28.037611 systemd[1]: Initializing machine ID from random generator.
Feb 13 21:23:28.037617 systemd[1]: Queued start job for default target initrd.target.
Feb 13 21:23:28.037623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 21:23:28.037629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 21:23:28.037636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 21:23:28.037642 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 21:23:28.037648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 21:23:28.037655 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 21:23:28.037661 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 21:23:28.037668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 21:23:28.037673 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Feb 13 21:23:28.037680 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Feb 13 21:23:28.037686 kernel: clocksource: Switched to clocksource tsc
Feb 13 21:23:28.037692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 21:23:28.037698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 21:23:28.037704 systemd[1]: Reached target paths.target - Path Units.
Feb 13 21:23:28.037710 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 21:23:28.037716 systemd[1]: Reached target swap.target - Swaps.
Feb 13 21:23:28.037722 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 21:23:28.037728 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 21:23:28.037735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 21:23:28.037741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 21:23:28.037747 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 21:23:28.037753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 21:23:28.037759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 21:23:28.037765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 21:23:28.037771 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 21:23:28.037777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 21:23:28.037784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 21:23:28.037790 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 21:23:28.037796 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 21:23:28.037802 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 21:23:28.037817 systemd-journald[266]: Collecting audit messages is disabled.
Feb 13 21:23:28.037833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 21:23:28.037839 systemd-journald[266]: Journal started
Feb 13 21:23:28.037853 systemd-journald[266]: Runtime Journal (/run/log/journal/3e237711bf0b4acc85618f353d3ac97a) is 8.0M, max 639.9M, 631.9M free.
Feb 13 21:23:28.061192 systemd-modules-load[268]: Inserted module 'overlay'
Feb 13 21:23:28.083115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:28.111714 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 21:23:28.183340 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 21:23:28.183356 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 21:23:28.183366 kernel: Bridge firewalling registered
Feb 13 21:23:28.168287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 21:23:28.173301 systemd-modules-load[268]: Inserted module 'br_netfilter'
Feb 13 21:23:28.195413 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 21:23:28.215449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 21:23:28.233464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:28.267441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 21:23:28.273214 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 21:23:28.305567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 21:23:28.312902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 21:23:28.316434 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 21:23:28.317404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 21:23:28.317763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:23:28.321765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 21:23:28.323352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 21:23:28.326033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 21:23:28.326518 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:28.356831 systemd-resolved[303]: Positive Trust Anchors:
Feb 13 21:23:28.356840 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 21:23:28.412247 dracut-cmdline[307]: dracut-dracut-053
Feb 13 21:23:28.412247 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 21:23:28.356878 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 21:23:28.537184 kernel: SCSI subsystem initialized
Feb 13 21:23:28.537199 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 21:23:28.359334 systemd-resolved[303]: Defaulting to hostname 'linux'.
Feb 13 21:23:28.370373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 21:23:28.569195 kernel: iscsi: registered transport (tcp)
Feb 13 21:23:28.381355 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 21:23:28.400436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 21:23:28.612228 kernel: iscsi: registered transport (qla4xxx)
Feb 13 21:23:28.612243 kernel: QLogic iSCSI HBA Driver
Feb 13 21:23:28.613338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 21:23:28.634410 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 21:23:28.690961 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 21:23:28.690979 kernel: device-mapper: uevent: version 1.0.3
Feb 13 21:23:28.722147 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 21:23:28.768158 kernel: raid6: avx2x4 gen() 53276 MB/s
Feb 13 21:23:28.800160 kernel: raid6: avx2x2 gen() 55092 MB/s
Feb 13 21:23:28.836536 kernel: raid6: avx2x1 gen() 46226 MB/s
Feb 13 21:23:28.836554 kernel: raid6: using algorithm avx2x2 gen() 55092 MB/s
Feb 13 21:23:28.883610 kernel: raid6: .... xor() 32015 MB/s, rmw enabled
Feb 13 21:23:28.883631 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 21:23:28.925138 kernel: xor: automatically using best checksumming function avx
Feb 13 21:23:29.040117 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 21:23:29.046045 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 21:23:29.062415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 21:23:29.072435 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Feb 13 21:23:29.085448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 21:23:29.111304 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 21:23:29.153664 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Feb 13 21:23:29.170386 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 21:23:29.198468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 21:23:29.256967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 21:23:29.289613 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 21:23:29.289629 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 21:23:29.300138 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 21:23:29.315281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 21:23:29.392235 kernel: ACPI: bus type USB registered
Feb 13 21:23:29.392256 kernel: usbcore: registered new interface driver usbfs
Feb 13 21:23:29.392264 kernel: usbcore: registered new interface driver hub
Feb 13 21:23:29.392272 kernel: usbcore: registered new device driver usb
Feb 13 21:23:29.392279 kernel: PTP clock support registered
Feb 13 21:23:29.392286 kernel: libata version 3.00 loaded.
Feb 13 21:23:29.392294 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 21:23:29.329304 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 21:23:29.466206 kernel: AES CTR mode by8 optimization enabled
Feb 13 21:23:29.466231 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 21:23:29.576077 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 13 21:23:29.576162 kernel: ahci 0000:00:17.0: version 3.0
Feb 13 21:23:29.734423 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 13 21:23:29.734497 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Feb 13 21:23:29.734563 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 21:23:29.734624 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 13 21:23:29.734686 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 13 21:23:29.734751 kernel: scsi host0: ahci
Feb 13 21:23:29.734814 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 13 21:23:29.734875 kernel: scsi host1: ahci
Feb 13 21:23:29.734933 kernel: hub 1-0:1.0: USB hub found
Feb 13 21:23:29.735004 kernel: scsi host2: ahci
Feb 13 21:23:29.735062 kernel: hub 1-0:1.0: 16 ports detected
Feb 13 21:23:29.735136 kernel: scsi host3: ahci
Feb 13 21:23:29.735201 kernel: hub 2-0:1.0: USB hub found
Feb 13 21:23:29.735269 kernel: scsi host4: ahci
Feb 13 21:23:29.735327 kernel: hub 2-0:1.0: 10 ports detected
Feb 13 21:23:29.735391 kernel: scsi host5: ahci
Feb 13 21:23:29.735449 kernel: scsi host6: ahci
Feb 13 21:23:29.735507 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Feb 13 21:23:29.735516 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Feb 13 21:23:29.735523 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Feb 13 21:23:29.735531 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Feb 13 21:23:29.735538 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Feb 13 21:23:29.735545 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Feb 13 21:23:29.735553 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Feb 13 21:23:29.444346 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 21:23:29.917941 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 13 21:23:29.917963 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 13 21:23:29.917972 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Feb 13 21:23:30.380403 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 13 21:23:30.380417 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 21:23:30.380495 kernel: pps pps0: new PPS source ptp0
Feb 13 21:23:30.380562 kernel: igb 0000:03:00.0: added PHC on eth0
Feb 13 21:23:30.380628 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 21:23:30.380690 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e5:56
Feb 13 21:23:30.380750 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Feb 13 21:23:30.380810 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 21:23:30.380869 kernel: hub 1-14:1.0: USB hub found
Feb 13 21:23:30.380952 kernel: hub 1-14:1.0: 4 ports detected
Feb 13 21:23:30.381019 kernel: pps pps1: new PPS source ptp1
Feb 13 21:23:30.381076 kernel: igb 0000:04:00.0: added PHC on eth1
Feb 13 21:23:30.381189 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 21:23:30.381197 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 21:23:30.381258 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:30.381267 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e5:57
Feb 13 21:23:30.381326 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 21:23:30.381336 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Feb 13 21:23:30.381395 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:30.381403 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 21:23:30.381461 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:30.381469 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 21:23:30.381528 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:30.381536 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Feb 13 21:23:30.381594 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 21:23:30.381604 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:30.381611 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 21:23:30.381618 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 21:23:30.381626 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 21:23:30.381633 kernel: ata2.00: Features: NCQ-prio
Feb 13 21:23:30.381640 kernel: ata1.00: Features: NCQ-prio
Feb 13 21:23:30.381647 kernel: ata2.00: configured for UDMA/133
Feb 13 21:23:30.381654 kernel: ata1.00: configured for UDMA/133
Feb 13 21:23:30.381661 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 21:23:30.381723 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 21:23:30.381740 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 13 21:23:30.381752 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Feb 13 21:23:30.946654 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 21:23:31.135297 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 21:23:31.135384 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Feb 13 21:23:31.135479 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 21:23:31.135496 kernel: usbcore: registered new interface driver usbhid
Feb 13 21:23:31.135510 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Feb 13 21:23:31.135617 kernel: usbhid: USB HID core driver
Feb 13 21:23:31.135632 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 13 21:23:31.135645 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 21:23:31.135654 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 21:23:31.135734 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 21:23:31.135749 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 21:23:31.135852 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 21:23:31.135929 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 21:23:31.135994 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Feb 13 21:23:31.136056 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 13 21:23:31.136123 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Feb 13 21:23:31.136184 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 13 21:23:31.136255 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 13 21:23:31.136266 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 13 21:23:31.136338 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 21:23:31.136401 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 13 21:23:31.136465 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Feb 13 21:23:31.136530 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 21:23:31.136594 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 21:23:31.136604 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 21:23:31.136672 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Feb 13 21:23:31.136735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 21:23:31.136744 kernel: GPT:9289727 != 937703087
Feb 13 21:23:31.136751 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 21:23:31.136758 kernel: GPT:9289727 != 937703087
Feb 13 21:23:31.136765 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 21:23:31.136772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 21:23:31.136780 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 21:23:31.136842 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Feb 13 21:23:31.136906 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 21:23:31.136913 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 21:23:31.136976 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Feb 13 21:23:31.137035 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Feb 13 21:23:29.496481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 21:23:31.178227 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (563)
Feb 13 21:23:29.725449 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 21:23:31.256350 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Feb 13 21:23:31.256524 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (577)
Feb 13 21:23:29.901315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 21:23:30.004359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 21:23:30.004388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:30.264440 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 21:23:30.282166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 21:23:30.282195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:31.406200 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 21:23:31.406215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 21:23:30.301193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:31.427219 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 21:23:30.340367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:31.447251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 21:23:30.356040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 21:23:31.466184 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 21:23:31.466195 disk-uuid[721]: Primary Header is updated.
Feb 13 21:23:31.466195 disk-uuid[721]: Secondary Entries is updated.
Feb 13 21:23:31.466195 disk-uuid[721]: Secondary Header is updated.
Feb 13 21:23:31.508194 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 21:23:30.487660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:30.615267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 21:23:31.158410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:31.238487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Feb 13 21:23:31.271908 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Feb 13 21:23:31.292861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Feb 13 21:23:31.310319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Feb 13 21:23:31.314072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Feb 13 21:23:31.343314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 21:23:32.470219 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 21:23:32.491086 disk-uuid[722]: The operation has completed successfully.
Feb 13 21:23:32.500193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 21:23:32.525195 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 21:23:32.525272 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 21:23:32.558428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 21:23:32.597224 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 21:23:32.597289 sh[739]: Success
Feb 13 21:23:32.631835 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 21:23:32.658573 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 21:23:32.660083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 21:23:32.730117 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 21:23:32.730156 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:32.759267 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 21:23:32.778722 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 21:23:32.796991 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 21:23:32.834105 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 21:23:32.835669 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 21:23:32.844582 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 21:23:32.856564 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 21:23:32.989704 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 21:23:32.989723 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:32.989730 kernel: BTRFS info (device sda6): using free space tree
Feb 13 21:23:32.989737 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 21:23:32.989745 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 21:23:32.989751 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 21:23:32.990226 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 21:23:33.001660 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 21:23:33.032516 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 21:23:33.043506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 21:23:33.080227 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 21:23:33.091173 systemd-networkd[922]: lo: Link UP
Feb 13 21:23:33.091175 systemd-networkd[922]: lo: Gained carrier
Feb 13 21:23:33.108273 ignition[896]: Ignition 2.19.0
Feb 13 21:23:33.093509 systemd-networkd[922]: Enumeration completed
Feb 13 21:23:33.108277 ignition[896]: Stage: fetch-offline
Feb 13 21:23:33.093561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 21:23:33.108300 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:33.094214 systemd-networkd[922]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 21:23:33.108305 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 21:23:33.110244 systemd[1]: Reached target network.target - Network.
Feb 13 21:23:33.108361 ignition[896]: parsed url from cmdline: ""
Feb 13 21:23:33.110351 unknown[896]: fetched base config from "system"
Feb 13 21:23:33.108363 ignition[896]: no config URL provided
Feb 13 21:23:33.110355 unknown[896]: fetched user config from "system"
Feb 13 21:23:33.108366 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 21:23:33.120975 systemd-networkd[922]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 21:23:33.108388 ignition[896]: parsing config with SHA512: b1549dea9d99ca6212b8264b9694d625c49735e8dd1a1365c5db1b82aabd09bb5f816f712ccaa0a43d23ea1461f5d4efe324f777d6a70efb97b7b19d5a18c5bf
Feb 13 21:23:33.131441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 21:23:33.110567 ignition[896]: fetch-offline: fetch-offline passed
Feb 13 21:23:33.149210 systemd-networkd[922]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 21:23:33.110569 ignition[896]: POST message to Packet Timeline
Feb 13 21:23:33.155274 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 21:23:33.110572 ignition[896]: POST Status error: resource requires networking
Feb 13 21:23:33.170500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 21:23:33.110606 ignition[896]: Ignition finished successfully
Feb 13 21:23:33.358308 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 21:23:33.352527 systemd-networkd[922]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 21:23:33.208631 ignition[934]: Ignition 2.19.0
Feb 13 21:23:33.208641 ignition[934]: Stage: kargs
Feb 13 21:23:33.208903 ignition[934]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:33.208920 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 21:23:33.210409 ignition[934]: kargs: kargs passed
Feb 13 21:23:33.210415 ignition[934]: POST message to Packet Timeline
Feb 13 21:23:33.210436 ignition[934]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 21:23:33.211482 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36741->[::1]:53: read: connection refused
Feb 13 21:23:33.411563 ignition[934]: GET https://metadata.packet.net/metadata: attempt #2
Feb 13 21:23:33.412568 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36518->[::1]:53: read: connection refused
Feb 13 21:23:33.593195 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 21:23:33.594084 systemd-networkd[922]: eno1: Link UP
Feb 13 21:23:33.594260 systemd-networkd[922]: eno2: Link UP
Feb 13 21:23:33.594387 systemd-networkd[922]: enp1s0f0np0: Link UP
Feb 13 21:23:33.594540 systemd-networkd[922]: enp1s0f0np0: Gained carrier
Feb 13 21:23:33.603242 systemd-networkd[922]: enp1s0f1np1: Link UP
Feb 13 21:23:33.638266 systemd-networkd[922]: enp1s0f0np0: DHCPv4 address 147.28.180.221/31, gateway 147.28.180.220 acquired from 145.40.83.140
Feb 13 21:23:33.812948 ignition[934]: GET https://metadata.packet.net/metadata: attempt #3
Feb 13 21:23:33.814038 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55429->[::1]:53: read: connection refused
Feb 13 21:23:34.369864 systemd-networkd[922]: enp1s0f1np1: Gained carrier
Feb 13 21:23:34.614560 ignition[934]: GET https://metadata.packet.net/metadata: attempt #4
Feb 13 21:23:34.615743 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39983->[::1]:53: read: connection refused
Feb 13 21:23:34.945723 systemd-networkd[922]: enp1s0f0np0: Gained IPv6LL
Feb 13 21:23:36.097702 systemd-networkd[922]: enp1s0f1np1: Gained IPv6LL
Feb 13 21:23:36.217211 ignition[934]: GET https://metadata.packet.net/metadata: attempt #5
Feb 13 21:23:36.218642 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37711->[::1]:53: read: connection refused
Feb 13 21:23:39.421834 ignition[934]: GET https://metadata.packet.net/metadata: attempt #6
Feb 13 21:23:40.102470 ignition[934]: GET result: OK
Feb 13 21:23:41.040287 ignition[934]: Ignition finished successfully
Feb 13 21:23:41.045436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 21:23:41.069379 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 21:23:41.075799 ignition[952]: Ignition 2.19.0
Feb 13 21:23:41.075803 ignition[952]: Stage: disks
Feb 13 21:23:41.075902 ignition[952]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:41.075909 ignition[952]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 21:23:41.076441 ignition[952]: disks: disks passed
Feb 13 21:23:41.076444 ignition[952]: POST message to Packet Timeline
Feb 13 21:23:41.076453 ignition[952]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 21:23:41.618023 ignition[952]: GET result: OK
Feb 13 21:23:41.992645 ignition[952]: Ignition finished successfully
Feb 13 21:23:41.995185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 21:23:42.011454 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 21:23:42.030388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 21:23:42.051374 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 21:23:42.072404 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 21:23:42.092402 systemd[1]: Reached target basic.target - Basic System.
Feb 13 21:23:42.120336 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 21:23:42.153518 systemd-fsck[969]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 21:23:42.165149 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 21:23:42.177403 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 21:23:42.293982 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 21:23:42.308356 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 21:23:42.294321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 21:23:42.331331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 21:23:42.340061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 21:23:42.458394 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (978)
Feb 13 21:23:42.458407 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 21:23:42.458439 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:42.458456 kernel: BTRFS info (device sda6): using free space tree
Feb 13 21:23:42.458470 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 21:23:42.458485 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 21:23:42.378772 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 21:23:42.482474 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Feb 13 21:23:42.495153 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 21:23:42.536270 coreos-metadata[980]: Feb 13 21:23:42.530 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 21:23:42.495174 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 21:23:42.577306 coreos-metadata[996]: Feb 13 21:23:42.530 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 21:23:42.519084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 21:23:42.545375 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 21:23:42.575351 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 21:23:42.627200 initrd-setup-root[1011]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 21:23:42.637236 initrd-setup-root[1018]: cut: /sysroot/etc/group: No such file or directory
Feb 13 21:23:42.647208 initrd-setup-root[1025]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 21:23:42.657204 initrd-setup-root[1032]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 21:23:42.660946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 21:23:42.685316 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 21:23:42.687377 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 21:23:42.733375 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 21:23:42.724865 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 21:23:42.747304 ignition[1103]: INFO : Ignition 2.19.0
Feb 13 21:23:42.747304 ignition[1103]: INFO : Stage: mount
Feb 13 21:23:42.754276 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:42.754276 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 21:23:42.754276 ignition[1103]: INFO : mount: mount passed
Feb 13 21:23:42.754276 ignition[1103]: INFO : POST message to Packet Timeline
Feb 13 21:23:42.754276 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 21:23:42.749866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 21:23:43.134057 coreos-metadata[980]: Feb 13 21:23:43.133 INFO Fetch successful
Feb 13 21:23:43.166214 coreos-metadata[980]: Feb 13 21:23:43.166 INFO wrote hostname ci-4081.3.1-a-e8b80a8c0e to /sysroot/etc/hostname
Feb 13 21:23:43.167390 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 21:23:43.211760 coreos-metadata[996]: Feb 13 21:23:43.211 INFO Fetch successful
Feb 13 21:23:43.282785 ignition[1103]: INFO : GET result: OK
Feb 13 21:23:43.285425 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 21:23:43.285498 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Feb 13 21:23:43.705306 ignition[1103]: INFO : Ignition finished successfully
Feb 13 21:23:43.708456 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 21:23:43.744363 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 21:23:43.755858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 21:23:43.817162 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1130)
Feb 13 21:23:43.845985 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 21:23:43.846002 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:43.863122 kernel: BTRFS info (device sda6): using free space tree
Feb 13 21:23:43.900508 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 21:23:43.900530 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 21:23:43.913568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 21:23:43.941490 ignition[1147]: INFO : Ignition 2.19.0
Feb 13 21:23:43.941490 ignition[1147]: INFO : Stage: files
Feb 13 21:23:43.957405 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:43.957405 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 21:23:43.957405 ignition[1147]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 21:23:43.957405 ignition[1147]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 21:23:43.957405 ignition[1147]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 21:23:43.957405 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 21:23:43.957405 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 21:23:43.945330 unknown[1147]: wrote ssh authorized keys file for user: core
Feb 13 21:23:44.091317 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 21:23:44.150546 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 21:23:44.150546 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:44.183300 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 21:23:44.782998 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 21:23:45.647413 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:45.647413 ignition[1147]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 21:23:45.677426 ignition[1147]: INFO : files: files passed
Feb 13 21:23:45.677426 ignition[1147]: INFO : POST message to Packet Timeline
Feb 13 21:23:45.677426 ignition[1147]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 21:23:46.614080 ignition[1147]: INFO : GET result: OK
Feb 13 21:23:47.000908 ignition[1147]: INFO : Ignition finished successfully
Feb 13 21:23:47.003951 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 21:23:47.040332 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 21:23:47.050849 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 21:23:47.060666 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 21:23:47.060730 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 21:23:47.119513 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 21:23:47.119513 initrd-setup-root-after-ignition[1188]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 21:23:47.158324 initrd-setup-root-after-ignition[1192]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 21:23:47.124490 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 21:23:47.135552 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 21:23:47.184335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 21:23:47.253836 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 21:23:47.254229 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 21:23:47.274383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 21:23:47.294415 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 21:23:47.315182 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 21:23:47.327246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 21:23:47.392826 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 21:23:47.420479 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 21:23:47.448019 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 21:23:47.459762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 21:23:47.480834 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 21:23:47.499776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 21:23:47.500220 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 21:23:47.526954 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 21:23:47.548787 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 21:23:47.567773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 21:23:47.585786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 21:23:47.607772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 21:23:47.629794 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 21:23:47.649791 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 21:23:47.670820 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 21:23:47.691804 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 21:23:47.711786 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 21:23:47.729668 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 21:23:47.730075 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 21:23:47.754888 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 21:23:47.774805 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 21:23:47.796661 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 21:23:47.797085 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 21:23:47.818621 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 21:23:47.819017 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 21:23:47.849779 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 21:23:47.850279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 21:23:47.870993 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 21:23:47.888628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 21:23:47.889060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 21:23:47.909782 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 21:23:47.928760 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 21:23:47.947750 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 21:23:47.948057 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 21:23:47.967795 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 21:23:47.968122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 21:23:47.990890 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 21:23:47.991319 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 21:23:48.010851 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 21:23:48.011256 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 21:23:48.146331 ignition[1213]: INFO : Ignition 2.19.0
Feb 13 21:23:48.146331 ignition[1213]: INFO : Stage: umount
Feb 13 21:23:48.146331 ignition[1213]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:48.146331 ignition[1213]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 21:23:48.146331 ignition[1213]: INFO : umount: umount passed
Feb 13 21:23:48.146331 ignition[1213]: INFO : POST message to Packet Timeline
Feb 13 21:23:48.146331 ignition[1213]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 21:23:48.028864 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 21:23:48.029289 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 21:23:48.058309 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 21:23:48.064770 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 21:23:48.080384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 21:23:48.080532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 21:23:48.118555 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 21:23:48.118632 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 21:23:48.149037 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 21:23:48.149753 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 21:23:48.149840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 21:23:48.154939 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 21:23:48.155031 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 21:23:48.765415 ignition[1213]: INFO : GET result: OK
Feb 13 21:23:49.151205 ignition[1213]: INFO : Ignition finished successfully
Feb 13 21:23:49.154093 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 21:23:49.154534 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 21:23:49.171536 systemd[1]: Stopped target network.target - Network.
Feb 13 21:23:49.187432 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 21:23:49.187610 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 21:23:49.206530 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 21:23:49.206668 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 21:23:49.224585 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 21:23:49.224744 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 21:23:49.242587 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 21:23:49.242758 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 21:23:49.260581 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 21:23:49.260756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 21:23:49.269193 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 21:23:49.278219 systemd-networkd[922]: enp1s0f0np0: DHCPv6 lease lost
Feb 13 21:23:49.285332 systemd-networkd[922]: enp1s0f1np1: DHCPv6 lease lost
Feb 13 21:23:49.296695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 21:23:49.315202 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 21:23:49.315482 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 21:23:49.334460 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 21:23:49.334818 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 21:23:49.354775 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 21:23:49.354890 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 21:23:49.396242 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 21:23:49.412309 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 21:23:49.412559 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 21:23:49.432597 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 21:23:49.432767 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:23:49.450596 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 21:23:49.450755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 21:23:49.470574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 21:23:49.470739 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 21:23:49.478961 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 21:23:49.511494 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 21:23:49.511967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 21:23:49.544644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 21:23:49.544682 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 21:23:49.565283 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 21:23:49.565310 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 21:23:49.587440 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 21:23:49.587526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 21:23:49.626310 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 21:23:49.626602 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 21:23:49.666300 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 21:23:49.666576 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:49.712568 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 21:23:49.718562 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 21:23:49.718714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 21:23:49.739405 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 21:23:49.739431 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 21:23:49.757327 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 21:23:49.997329 systemd-journald[266]: Received SIGTERM from PID 1 (systemd).
Feb 13 21:23:49.757352 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 21:23:49.787489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 21:23:49.787627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:49.810622 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 21:23:49.810966 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 21:23:49.831241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 21:23:49.831487 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 21:23:49.851341 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 21:23:49.887526 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 21:23:49.936277 systemd[1]: Switching root.
Feb 13 21:23:50.093190 systemd-journald[266]: Journal stopped
Feb 13 21:23:52.664845 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 21:23:52.664859 kernel: SELinux: policy capability open_perms=1
Feb 13 21:23:52.664866 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 21:23:52.664873 kernel: SELinux: policy capability always_check_network=0
Feb 13 21:23:52.664878 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 21:23:52.664884 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 21:23:52.664890 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 21:23:52.664895 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 21:23:52.664901 kernel: audit: type=1403 audit(1739481830.294:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 21:23:52.664907 systemd[1]: Successfully loaded SELinux policy in 161.087ms.
Feb 13 21:23:52.664915 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.105ms.
Feb 13 21:23:52.664922 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 21:23:52.664928 systemd[1]: Detected architecture x86-64.
Feb 13 21:23:52.664934 systemd[1]: Detected first boot.
Feb 13 21:23:52.664941 systemd[1]: Hostname set to .
Feb 13 21:23:52.664949 systemd[1]: Initializing machine ID from random generator.
Feb 13 21:23:52.664955 zram_generator::config[1266]: No configuration found.
Feb 13 21:23:52.664962 systemd[1]: Populated /etc with preset unit settings.
Feb 13 21:23:52.664968 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 21:23:52.664975 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 21:23:52.664981 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 21:23:52.664988 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 21:23:52.664995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 21:23:52.665001 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 21:23:52.665008 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 21:23:52.665015 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 21:23:52.665021 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 21:23:52.665028 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 21:23:52.665035 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 21:23:52.665042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 21:23:52.665049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 21:23:52.665055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 21:23:52.665062 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 21:23:52.665068 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 21:23:52.665075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 21:23:52.665081 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Feb 13 21:23:52.665088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 21:23:52.665095 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 21:23:52.665105 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 21:23:52.665111 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 21:23:52.665120 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 21:23:52.665126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 21:23:52.665133 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 21:23:52.665140 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 21:23:52.665148 systemd[1]: Reached target swap.target - Swaps.
Feb 13 21:23:52.665155 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 21:23:52.665162 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 21:23:52.665169 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 21:23:52.665175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 21:23:52.665182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 21:23:52.665190 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 21:23:52.665197 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 21:23:52.665204 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 21:23:52.665211 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 21:23:52.665218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.665225 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 21:23:52.665232 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 21:23:52.665240 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 21:23:52.665247 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 21:23:52.665254 systemd[1]: Reached target machines.target - Containers.
Feb 13 21:23:52.665261 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 21:23:52.665267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:52.665274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 21:23:52.665281 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 21:23:52.665288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 21:23:52.665295 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 21:23:52.665303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 21:23:52.665310 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 21:23:52.665316 kernel: ACPI: bus type drm_connector registered
Feb 13 21:23:52.665323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 21:23:52.665329 kernel: fuse: init (API version 7.39)
Feb 13 21:23:52.665336 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 21:23:52.665343 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 21:23:52.665349 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 21:23:52.665357 kernel: loop: module loaded
Feb 13 21:23:52.665363 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 21:23:52.665370 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 21:23:52.665377 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 21:23:52.665391 systemd-journald[1369]: Collecting audit messages is disabled.
Feb 13 21:23:52.665407 systemd-journald[1369]: Journal started
Feb 13 21:23:52.665421 systemd-journald[1369]: Runtime Journal (/run/log/journal/8d41c29751484adc9082d2980cd333ea) is 8.0M, max 639.9M, 631.9M free.
Feb 13 21:23:50.811859 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 21:23:50.828506 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 21:23:50.828779 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 21:23:52.693107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 21:23:52.726140 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 21:23:52.760166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 21:23:52.793147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 21:23:52.826481 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 21:23:52.826509 systemd[1]: Stopped verity-setup.service.
Feb 13 21:23:52.889148 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.910303 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 21:23:52.919782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 21:23:52.930393 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 21:23:52.940383 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 21:23:52.950405 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 21:23:52.960334 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 21:23:52.970374 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 21:23:52.980520 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 21:23:52.992500 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 21:23:53.004685 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 21:23:53.004909 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 21:23:53.018044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 21:23:53.018454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 21:23:53.030034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 21:23:53.030441 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 21:23:53.041020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 21:23:53.041419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 21:23:53.053052 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 21:23:53.053447 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 21:23:53.064012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 21:23:53.064454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 21:23:53.075046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 21:23:53.087093 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 21:23:53.099981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 21:23:53.112974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 21:23:53.133444 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 21:23:53.150313 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 21:23:53.161008 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 21:23:53.171292 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 21:23:53.171321 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 21:23:53.182421 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 21:23:53.212421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 21:23:53.224132 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 21:23:53.234357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:53.235578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 21:23:53.246109 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 21:23:53.257248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 21:23:53.257941 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 21:23:53.263564 systemd-journald[1369]: Time spent on flushing to /var/log/journal/8d41c29751484adc9082d2980cd333ea is 13.483ms for 1371 entries.
Feb 13 21:23:53.263564 systemd-journald[1369]: System Journal (/var/log/journal/8d41c29751484adc9082d2980cd333ea) is 8.0M, max 195.6M, 187.6M free.
Feb 13 21:23:53.309523 systemd-journald[1369]: Received client request to flush runtime journal.
Feb 13 21:23:53.275296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 21:23:53.275895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 21:23:53.287127 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 21:23:53.307919 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 21:23:53.324923 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 21:23:53.333107 kernel: loop0: detected capacity change from 0 to 8
Feb 13 21:23:53.333715 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 21:23:53.358169 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 21:23:53.368339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 21:23:53.374758 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Feb 13 21:23:53.374768 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Feb 13 21:23:53.379369 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 21:23:53.390339 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 21:23:53.407329 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 21:23:53.412148 kernel: loop1: detected capacity change from 0 to 205544
Feb 13 21:23:53.423322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:23:53.433358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 21:23:53.447048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 21:23:53.469318 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 21:23:53.488149 kernel: loop2: detected capacity change from 0 to 140768
Feb 13 21:23:53.498914 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 21:23:53.508711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 21:23:53.509166 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 21:23:53.520688 udevadm[1405]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 21:23:53.534068 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 21:23:53.569136 kernel: loop3: detected capacity change from 0 to 142488
Feb 13 21:23:53.570264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 21:23:53.577845 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Feb 13 21:23:53.577859 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Feb 13 21:23:53.581406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 21:23:53.615796 ldconfig[1395]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 21:23:53.617033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 21:23:53.645155 kernel: loop4: detected capacity change from 0 to 8
Feb 13 21:23:53.665160 kernel: loop5: detected capacity change from 0 to 205544
Feb 13 21:23:53.698065 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 21:23:53.705149 kernel: loop6: detected capacity change from 0 to 140768
Feb 13 21:23:53.736140 kernel: loop7: detected capacity change from 0 to 142488
Feb 13 21:23:53.741265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 21:23:53.747097 (sd-merge)[1430]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Feb 13 21:23:53.747328 (sd-merge)[1430]: Merged extensions into '/usr'.
Feb 13 21:23:53.753559 systemd-udevd[1433]: Using default interface naming scheme 'v255'.
Feb 13 21:23:53.753866 systemd[1]: Reloading requested from client PID 1400 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 21:23:53.753873 systemd[1]: Reloading...
Feb 13 21:23:53.792111 zram_generator::config[1469]: No configuration found.
Feb 13 21:23:53.792162 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1444)
Feb 13 21:23:53.820109 kernel: IPMI message handler: version 39.2
Feb 13 21:23:53.820151 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 13 21:23:53.853111 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 21:23:53.853192 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 21:23:53.862980 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 21:23:53.913116 kernel: ipmi device interface
Feb 13 21:23:53.913189 kernel: ACPI: button: Power Button [PWRF]
Feb 13 21:23:53.949107 kernel: ipmi_si: IPMI System Interface driver
Feb 13 21:23:53.949155 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Feb 13 21:23:53.960285 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Feb 13 21:23:53.960377 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Feb 13 21:23:53.960464 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Feb 13 21:23:53.960537 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Feb 13 21:23:53.960617 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Feb 13 21:23:54.067398 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Feb 13 21:23:54.067415 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Feb 13 21:23:54.067428 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Feb 13 21:23:54.172962 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Feb 13 21:23:54.173059 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Feb 13 21:23:54.173151 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Feb 13 21:23:54.173167 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Feb 13 21:23:53.955158 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 21:23:54.013280 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Feb 13 21:23:54.013379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Feb 13 21:23:54.091381 systemd[1]: Reloading finished in 337 ms.
Feb 13 21:23:54.232104 kernel: iTCO_vendor_support: vendor-support=0
Feb 13 21:23:54.232131 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Feb 13 21:23:54.301881 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Feb 13 21:23:54.330941 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Feb 13 21:23:54.331026 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Feb 13 21:23:54.340240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 21:23:54.358363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 21:23:54.367159 kernel: intel_rapl_common: Found RAPL domain package
Feb 13 21:23:54.367419 kernel: intel_rapl_common: Found RAPL domain core
Feb 13 21:23:54.383691 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Feb 13 21:23:54.383801 kernel: intel_rapl_common: Found RAPL domain dram
Feb 13 21:23:54.426106 kernel: ipmi_ssif: IPMI SSIF Interface driver
Feb 13 21:23:54.437404 systemd[1]: Starting ensure-sysext.service...
Feb 13 21:23:54.445739 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 21:23:54.458080 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 21:23:54.468793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 21:23:54.469448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:54.469727 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 21:23:54.471837 systemd[1]: Reloading requested from client PID 1608 ('systemctl') (unit ensure-sysext.service)...
Feb 13 21:23:54.471844 systemd[1]: Reloading...
Feb 13 21:23:54.509114 zram_generator::config[1639]: No configuration found.
Feb 13 21:23:54.535454 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 21:23:54.535672 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 21:23:54.536180 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 21:23:54.536357 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Feb 13 21:23:54.536400 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Feb 13 21:23:54.538288 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 21:23:54.538292 systemd-tmpfiles[1612]: Skipping /boot
Feb 13 21:23:54.542516 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 21:23:54.542520 systemd-tmpfiles[1612]: Skipping /boot
Feb 13 21:23:54.573019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 21:23:54.628062 systemd[1]: Reloading finished in 155 ms.
Feb 13 21:23:54.658360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 21:23:54.670371 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 21:23:54.681269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:54.709336 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 21:23:54.721223 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 21:23:54.727039 augenrules[1722]: No rules
Feb 13 21:23:54.732862 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 21:23:54.757506 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 21:23:54.764152 lvm[1727]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 21:23:54.770259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 21:23:54.781825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 21:23:54.794016 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 21:23:54.804822 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 21:23:54.815316 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 21:23:54.826468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 21:23:54.838384 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 21:23:54.849389 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 21:23:54.849928 systemd-networkd[1610]: lo: Link UP
Feb 13 21:23:54.849931 systemd-networkd[1610]: lo: Gained carrier
Feb 13 21:23:54.852482 systemd-networkd[1610]: bond0: netdev ready
Feb 13 21:23:54.853384 systemd-networkd[1610]: Enumeration completed
Feb 13 21:23:54.854448 systemd-networkd[1610]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:c5:3c.network.
Feb 13 21:23:54.861295 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 21:23:54.876097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 21:23:54.886215 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:54.886360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:54.893831 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 21:23:54.904818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 21:23:54.906712 lvm[1747]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 21:23:54.914907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 21:23:54.922329 systemd-resolved[1729]: Positive Trust Anchors:
Feb 13 21:23:54.922335 systemd-resolved[1729]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 21:23:54.922359 systemd-resolved[1729]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 21:23:54.924989 systemd-resolved[1729]: Using system hostname 'ci-4081.3.1-a-e8b80a8c0e'.
Feb 13 21:23:54.928044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 21:23:54.938359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:54.939154 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 21:23:54.951039 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 21:23:54.961288 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 21:23:54.961409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:54.963023 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 21:23:54.974788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 21:23:54.974910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 21:23:54.986921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 21:23:54.987067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 21:23:54.999429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 21:23:54.999675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 21:23:55.015723 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 21:23:55.030135 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 21:23:55.051959 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 21:23:55.064179 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Feb 13 21:23:55.064184 systemd-networkd[1610]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:c5:3d.network.
Feb 13 21:23:55.087539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:55.088152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:55.103762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 21:23:55.115976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 21:23:55.129913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 21:23:55.139476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:55.139845 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 21:23:55.140090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:55.142880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 21:23:55.143251 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 21:23:55.155592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 21:23:55.155929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 21:23:55.168491 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 21:23:55.168832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 21:23:55.189582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:55.190245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:55.215770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 21:23:55.227351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 21:23:55.243932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 21:23:55.259180 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 21:23:55.260265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 21:23:55.286766 systemd-networkd[1610]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Feb 13 21:23:55.286997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:55.287218 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Feb 13 21:23:55.287294 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 21:23:55.287485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:55.288463 systemd-networkd[1610]: enp1s0f0np0: Link UP
Feb 13 21:23:55.288993 systemd-networkd[1610]: enp1s0f0np0: Gained carrier
Feb 13 21:23:55.289330 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 21:23:55.315217 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 13 21:23:55.325345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 21:23:55.325613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 21:23:55.329672 systemd-networkd[1610]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:c5:3c.network.
Feb 13 21:23:55.330297 systemd-networkd[1610]: enp1s0f1np1: Link UP
Feb 13 21:23:55.330848 systemd-networkd[1610]: enp1s0f1np1: Gained carrier
Feb 13 21:23:55.337172 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 21:23:55.337441 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 21:23:55.347706 systemd-networkd[1610]: bond0: Link UP
Feb 13 21:23:55.348400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 21:23:55.348633 systemd-networkd[1610]: bond0: Gained carrier
Feb 13 21:23:55.348739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 21:23:55.361430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 21:23:55.361770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 21:23:55.372128 systemd[1]: Finished ensure-sysext.service. Feb 13 21:23:55.381538 systemd[1]: Reached target network.target - Network. Feb 13 21:23:55.390187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:23:55.410149 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 21:23:55.410184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 21:23:55.415139 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 21:23:55.415168 kernel: bond0: active interface up! Feb 13 21:23:55.429195 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 21:23:55.475471 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 21:23:55.486269 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:23:55.496229 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 21:23:55.507194 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 21:23:55.518183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 21:23:55.537137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 21:23:55.537153 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:23:55.543131 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 21:23:55.551166 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 21:23:55.561243 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 21:23:55.571223 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 21:23:55.582166 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:23:55.590583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 21:23:55.600799 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 21:23:55.612953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 21:23:55.622399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 21:23:55.633188 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:23:55.643131 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:23:55.651151 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 21:23:55.651167 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 21:23:55.662216 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 21:23:55.673765 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 21:23:55.691204 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 21:23:55.699879 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 21:23:55.702275 dbus-daemon[1781]: [system] SELinux support is enabled Feb 13 21:23:55.704638 coreos-metadata[1780]: Feb 13 21:23:55.704 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:23:55.709751 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 21:23:55.711475 jq[1784]: false Feb 13 21:23:55.719210 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 21:23:55.719769 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 21:23:55.727011 extend-filesystems[1786]: Found loop4 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found loop5 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found loop6 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found loop7 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda1 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda2 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda3 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found usr Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda4 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda6 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda7 Feb 13 21:23:55.729264 extend-filesystems[1786]: Found sda9 Feb 13 21:23:55.729264 extend-filesystems[1786]: Checking size of /dev/sda9 Feb 13 21:23:55.893297 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 21:23:55.893322 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1532) Feb 13 21:23:55.729901 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 21:23:55.893393 extend-filesystems[1786]: Resized partition /dev/sda9 Feb 13 21:23:55.775574 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 21:23:55.901396 extend-filesystems[1794]: resize2fs 1.47.1 (20-May-2024) Feb 13 21:23:55.813590 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 21:23:55.834579 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 21:23:55.873241 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Feb 13 21:23:55.893428 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 21:23:55.893815 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 21:23:55.894611 systemd-logind[1806]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 21:23:55.894621 systemd-logind[1806]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 21:23:55.894631 systemd-logind[1806]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 21:23:55.894777 systemd-logind[1806]: New seat seat0. Feb 13 21:23:55.925199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 21:23:55.926777 jq[1812]: true Feb 13 21:23:55.932336 update_engine[1811]: I20250213 21:23:55.932300 1811 main.cc:92] Flatcar Update Engine starting Feb 13 21:23:55.933091 update_engine[1811]: I20250213 21:23:55.933073 1811 update_check_scheduler.cc:74] Next update check in 9m5s Feb 13 21:23:55.936369 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 21:23:55.937306 sshd_keygen[1809]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 21:23:55.947360 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 21:23:55.973276 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 21:23:55.973378 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 21:23:55.973551 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 21:23:55.973641 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 21:23:55.984551 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 21:23:55.984636 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 21:23:55.996339 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 21:23:56.009110 (ntainerd)[1824]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 21:23:56.010618 jq[1823]: true Feb 13 21:23:56.012358 dbus-daemon[1781]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 21:23:56.013425 tar[1821]: linux-amd64/helm Feb 13 21:23:56.018169 systemd[1]: Started update-engine.service - Update Engine. Feb 13 21:23:56.028319 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 21:23:56.028419 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Feb 13 21:23:56.033072 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 21:23:56.041203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 21:23:56.041299 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 21:23:56.052224 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 21:23:56.052361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 21:23:56.068689 bash[1851]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:23:56.078338 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 21:23:56.092787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Feb 13 21:23:56.096619 locksmithd[1859]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 21:23:56.103513 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 21:23:56.103605 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 21:23:56.126348 systemd[1]: Starting sshkeys.service... Feb 13 21:23:56.133874 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 21:23:56.145999 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 21:23:56.157949 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 21:23:56.169479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 21:23:56.181329 coreos-metadata[1874]: Feb 13 21:23:56.181 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:23:56.181736 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 21:23:56.183664 containerd[1824]: time="2025-02-13T21:23:56.183623008Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 21:23:56.191032 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Feb 13 21:23:56.196729 containerd[1824]: time="2025-02-13T21:23:56.196709602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197434 containerd[1824]: time="2025-02-13T21:23:56.197419116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197457 containerd[1824]: time="2025-02-13T21:23:56.197434631Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 21:23:56.197457 containerd[1824]: time="2025-02-13T21:23:56.197444588Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 21:23:56.197548 containerd[1824]: time="2025-02-13T21:23:56.197529068Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 21:23:56.197548 containerd[1824]: time="2025-02-13T21:23:56.197540022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197597 containerd[1824]: time="2025-02-13T21:23:56.197574654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197597 containerd[1824]: time="2025-02-13T21:23:56.197583022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197707 containerd[1824]: time="2025-02-13T21:23:56.197672637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197707 containerd[1824]: time="2025-02-13T21:23:56.197682367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197707 containerd[1824]: time="2025-02-13T21:23:56.197689883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197707 containerd[1824]: time="2025-02-13T21:23:56.197695285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197768 containerd[1824]: time="2025-02-13T21:23:56.197737602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:56.197984 containerd[1824]: time="2025-02-13T21:23:56.197975792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:56.198037 containerd[1824]: time="2025-02-13T21:23:56.198029229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:56.198057 containerd[1824]: time="2025-02-13T21:23:56.198038128Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 21:23:56.198089 containerd[1824]: time="2025-02-13T21:23:56.198082085Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 21:23:56.198162 containerd[1824]: time="2025-02-13T21:23:56.198113449Z" level=info msg="metadata content store policy set" policy=shared Feb 13 21:23:56.200304 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 21:23:56.210963 containerd[1824]: time="2025-02-13T21:23:56.210947384Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 21:23:56.211015 containerd[1824]: time="2025-02-13T21:23:56.210977154Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 21:23:56.211015 containerd[1824]: time="2025-02-13T21:23:56.210993566Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 21:23:56.211015 containerd[1824]: time="2025-02-13T21:23:56.211009058Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 21:23:56.211090 containerd[1824]: time="2025-02-13T21:23:56.211023445Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 21:23:56.211134 containerd[1824]: time="2025-02-13T21:23:56.211122567Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 21:23:56.211315 containerd[1824]: time="2025-02-13T21:23:56.211272819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 21:23:56.211395 containerd[1824]: time="2025-02-13T21:23:56.211347807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 21:23:56.211395 containerd[1824]: time="2025-02-13T21:23:56.211358622Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 21:23:56.211395 containerd[1824]: time="2025-02-13T21:23:56.211366683Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 21:23:56.211395 containerd[1824]: time="2025-02-13T21:23:56.211378863Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211395 containerd[1824]: time="2025-02-13T21:23:56.211392308Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211400583Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211408623Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211416422Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211423622Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211430275Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211436478Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211451907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211461848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211468689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211475993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211483 containerd[1824]: time="2025-02-13T21:23:56.211482710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211490223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211499290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211506723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211518386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211529008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211535990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211542958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211559290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211568310Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211583368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211593942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211599960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 21:23:56.211630 containerd[1824]: time="2025-02-13T21:23:56.211624417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211633919Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211641618Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211653024Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211660401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211667661Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211675935Z" level=info msg="NRI interface is disabled by configuration." Feb 13 21:23:56.211820 containerd[1824]: time="2025-02-13T21:23:56.211682402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 21:23:56.211917 containerd[1824]: time="2025-02-13T21:23:56.211854242Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 21:23:56.211917 containerd[1824]: time="2025-02-13T21:23:56.211888983Z" level=info msg="Connect containerd service" Feb 13 21:23:56.211917 containerd[1824]: time="2025-02-13T21:23:56.211910142Z" level=info msg="using legacy CRI server" Feb 13 21:23:56.212019 containerd[1824]: time="2025-02-13T21:23:56.211917380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 21:23:56.212019 containerd[1824]: time="2025-02-13T21:23:56.211969246Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 21:23:56.212344 containerd[1824]: time="2025-02-13T21:23:56.212304237Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:23:56.212436 containerd[1824]: time="2025-02-13T21:23:56.212418264Z" level=info msg="Start subscribing containerd event" Feb 13 21:23:56.212469 containerd[1824]: time="2025-02-13T21:23:56.212445585Z" level=info msg="Start recovering state" Feb 13 21:23:56.212497 containerd[1824]: time="2025-02-13T21:23:56.212467385Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 21:23:56.212525 containerd[1824]: time="2025-02-13T21:23:56.212498884Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 21:23:56.212525 containerd[1824]: time="2025-02-13T21:23:56.212500883Z" level=info msg="Start event monitor" Feb 13 21:23:56.212525 containerd[1824]: time="2025-02-13T21:23:56.212518620Z" level=info msg="Start snapshots syncer" Feb 13 21:23:56.212603 containerd[1824]: time="2025-02-13T21:23:56.212526597Z" level=info msg="Start cni network conf syncer for default" Feb 13 21:23:56.212603 containerd[1824]: time="2025-02-13T21:23:56.212533341Z" level=info msg="Start streaming server" Feb 13 21:23:56.212603 containerd[1824]: time="2025-02-13T21:23:56.212576458Z" level=info msg="containerd successfully booted in 0.029787s" Feb 13 21:23:56.212603 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 21:23:56.276109 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 21:23:56.300734 extend-filesystems[1794]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 21:23:56.300734 extend-filesystems[1794]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 21:23:56.300734 extend-filesystems[1794]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 21:23:56.329303 extend-filesystems[1786]: Resized filesystem in /dev/sda9 Feb 13 21:23:56.329303 extend-filesystems[1786]: Found sdb Feb 13 21:23:56.348575 tar[1821]: linux-amd64/LICENSE Feb 13 21:23:56.348575 tar[1821]: linux-amd64/README.md Feb 13 21:23:56.301602 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 21:23:56.301704 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 21:23:56.358026 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 21:23:56.897158 systemd-networkd[1610]: bond0: Gained IPv6LL Feb 13 21:23:56.898845 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 21:23:56.910677 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 21:23:56.935269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:23:56.945839 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 21:23:56.964272 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 21:23:57.567917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:23:57.579780 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:23:57.999896 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Feb 13 21:23:58.000047 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Feb 13 21:23:58.010044 kubelet[1914]: E0213 21:23:58.009996 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:23:58.011054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:23:58.011171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 21:23:59.252775 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 21:23:59.272391 systemd[1]: Started sshd@0-147.28.180.221:22-139.178.89.65:49994.service - OpenSSH per-connection server daemon (139.178.89.65:49994). 
Feb 13 21:23:59.317622 sshd[1936]: Accepted publickey for core from 139.178.89.65 port 49994 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 21:23:59.318773 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:23:59.324157 systemd-logind[1806]: New session 1 of user core. Feb 13 21:23:59.325001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 21:23:59.348442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 21:23:59.361823 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 21:23:59.384435 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 21:23:59.403154 (systemd)[1940]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 21:23:59.492556 systemd[1940]: Queued start job for default target default.target. Feb 13 21:23:59.500779 systemd[1940]: Created slice app.slice - User Application Slice. Feb 13 21:23:59.500792 systemd[1940]: Reached target paths.target - Paths. Feb 13 21:23:59.500800 systemd[1940]: Reached target timers.target - Timers. Feb 13 21:23:59.501424 systemd[1940]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 21:23:59.506856 systemd[1940]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 21:23:59.506884 systemd[1940]: Reached target sockets.target - Sockets. Feb 13 21:23:59.506892 systemd[1940]: Reached target basic.target - Basic System. Feb 13 21:23:59.506912 systemd[1940]: Reached target default.target - Main User Target. Feb 13 21:23:59.506927 systemd[1940]: Startup finished in 99ms. Feb 13 21:23:59.507043 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 21:23:59.524464 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 21:23:59.598463 systemd[1]: Started sshd@1-147.28.180.221:22-139.178.89.65:50002.service - OpenSSH per-connection server daemon (139.178.89.65:50002).
Feb 13 21:23:59.638733 sshd[1951]: Accepted publickey for core from 139.178.89.65 port 50002 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:23:59.639365 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:23:59.641738 systemd-logind[1806]: New session 2 of user core.
Feb 13 21:23:59.656253 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 21:23:59.715414 sshd[1951]: pam_unix(sshd:session): session closed for user core
Feb 13 21:23:59.735236 systemd[1]: sshd@1-147.28.180.221:22-139.178.89.65:50002.service: Deactivated successfully.
Feb 13 21:23:59.736197 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 21:23:59.737103 systemd-logind[1806]: Session 2 logged out. Waiting for processes to exit.
Feb 13 21:23:59.738015 systemd[1]: Started sshd@2-147.28.180.221:22-139.178.89.65:50010.service - OpenSSH per-connection server daemon (139.178.89.65:50010).
Feb 13 21:23:59.751476 systemd-logind[1806]: Removed session 2.
Feb 13 21:23:59.783508 sshd[1958]: Accepted publickey for core from 139.178.89.65 port 50010 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:23:59.784380 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:23:59.787911 systemd-logind[1806]: New session 3 of user core.
Feb 13 21:23:59.805705 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 21:23:59.886367 sshd[1958]: pam_unix(sshd:session): session closed for user core
Feb 13 21:23:59.892749 systemd[1]: sshd@2-147.28.180.221:22-139.178.89.65:50010.service: Deactivated successfully.
Feb 13 21:23:59.896611 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 21:23:59.898095 systemd-logind[1806]: Session 3 logged out. Waiting for processes to exit.
Feb 13 21:23:59.898803 systemd-logind[1806]: Removed session 3.
Feb 13 21:24:00.729630 systemd-timesyncd[1775]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Feb 13 21:24:00.729667 systemd-timesyncd[1775]: Initial clock synchronization to Thu 2025-02-13 21:24:00.445588 UTC.
Feb 13 21:24:01.257169 login[1884]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 21:24:01.257735 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 21:24:01.259781 systemd-logind[1806]: New session 5 of user core.
Feb 13 21:24:01.277317 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 21:24:01.278905 systemd-logind[1806]: New session 4 of user core.
Feb 13 21:24:01.279651 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 21:24:01.320429 coreos-metadata[1780]: Feb 13 21:24:01.320 INFO Fetch successful
Feb 13 21:24:01.381203 coreos-metadata[1874]: Feb 13 21:24:01.381 INFO Fetch successful
Feb 13 21:24:01.405788 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 21:24:01.407107 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Feb 13 21:24:01.414096 unknown[1874]: wrote ssh authorized keys file for user: core
Feb 13 21:24:01.442363 update-ssh-keys[1997]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 21:24:01.442871 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 21:24:01.443708 systemd[1]: Finished sshkeys.service.
Feb 13 21:24:01.734476 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Feb 13 21:24:01.737016 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 21:24:01.737622 systemd[1]: Startup finished in 2.666s (kernel) + 23.286s (initrd) + 11.602s (userspace) = 37.556s.
Feb 13 21:24:08.149607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 21:24:08.168346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:08.355916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:08.358072 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 21:24:08.411357 kubelet[2009]: E0213 21:24:08.411078 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 21:24:08.417264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 21:24:08.417338 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 21:24:09.718576 systemd[1]: Started sshd@3-147.28.180.221:22-139.178.89.65:43756.service - OpenSSH per-connection server daemon (139.178.89.65:43756).
Feb 13 21:24:09.748961 sshd[2029]: Accepted publickey for core from 139.178.89.65 port 43756 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:24:09.749760 sshd[2029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:09.752965 systemd-logind[1806]: New session 6 of user core.
Feb 13 21:24:09.774457 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 21:24:09.829415 sshd[2029]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:09.841792 systemd[1]: sshd@3-147.28.180.221:22-139.178.89.65:43756.service: Deactivated successfully.
Feb 13 21:24:09.842565 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 21:24:09.843345 systemd-logind[1806]: Session 6 logged out. Waiting for processes to exit.
Feb 13 21:24:09.844055 systemd[1]: Started sshd@4-147.28.180.221:22-139.178.89.65:43758.service - OpenSSH per-connection server daemon (139.178.89.65:43758).
Feb 13 21:24:09.844647 systemd-logind[1806]: Removed session 6.
Feb 13 21:24:09.882708 sshd[2036]: Accepted publickey for core from 139.178.89.65 port 43758 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:24:09.884566 sshd[2036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:09.892931 systemd-logind[1806]: New session 7 of user core.
Feb 13 21:24:09.911611 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 21:24:09.973248 sshd[2036]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:09.989049 systemd[1]: sshd@4-147.28.180.221:22-139.178.89.65:43758.service: Deactivated successfully.
Feb 13 21:24:09.992806 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 21:24:09.996393 systemd-logind[1806]: Session 7 logged out. Waiting for processes to exit.
Feb 13 21:24:10.013416 systemd[1]: Started sshd@5-147.28.180.221:22-139.178.89.65:43764.service - OpenSSH per-connection server daemon (139.178.89.65:43764).
Feb 13 21:24:10.013942 systemd-logind[1806]: Removed session 7.
Feb 13 21:24:10.043031 sshd[2043]: Accepted publickey for core from 139.178.89.65 port 43764 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:24:10.044085 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:10.047918 systemd-logind[1806]: New session 8 of user core.
Feb 13 21:24:10.059360 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 21:24:10.124740 sshd[2043]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:10.143825 systemd[1]: sshd@5-147.28.180.221:22-139.178.89.65:43764.service: Deactivated successfully.
Feb 13 21:24:10.144952 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 21:24:10.145618 systemd-logind[1806]: Session 8 logged out. Waiting for processes to exit.
Feb 13 21:24:10.146225 systemd[1]: Started sshd@6-147.28.180.221:22-139.178.89.65:43776.service - OpenSSH per-connection server daemon (139.178.89.65:43776).
Feb 13 21:24:10.146628 systemd-logind[1806]: Removed session 8.
Feb 13 21:24:10.179367 sshd[2051]: Accepted publickey for core from 139.178.89.65 port 43776 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:24:10.180392 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:10.184106 systemd-logind[1806]: New session 9 of user core.
Feb 13 21:24:10.193341 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 21:24:10.255308 sudo[2054]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 21:24:10.255458 sudo[2054]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:10.274849 sudo[2054]: pam_unix(sudo:session): session closed for user root
Feb 13 21:24:10.276096 sshd[2051]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:10.293194 systemd[1]: sshd@6-147.28.180.221:22-139.178.89.65:43776.service: Deactivated successfully.
Feb 13 21:24:10.297013 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 21:24:10.300607 systemd-logind[1806]: Session 9 logged out. Waiting for processes to exit.
Feb 13 21:24:10.315423 systemd[1]: Started sshd@7-147.28.180.221:22-139.178.89.65:43780.service - OpenSSH per-connection server daemon (139.178.89.65:43780).
Feb 13 21:24:10.315907 systemd-logind[1806]: Removed session 9.
Feb 13 21:24:10.346438 sshd[2059]: Accepted publickey for core from 139.178.89.65 port 43780 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:24:10.347274 sshd[2059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:10.350309 systemd-logind[1806]: New session 10 of user core.
Feb 13 21:24:10.360321 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 21:24:10.419561 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 21:24:10.419712 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:10.421811 sudo[2063]: pam_unix(sudo:session): session closed for user root
Feb 13 21:24:10.424350 sudo[2062]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 21:24:10.424498 sudo[2062]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:10.437441 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 21:24:10.438447 auditctl[2066]: No rules
Feb 13 21:24:10.438655 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 21:24:10.438771 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 21:24:10.440342 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 21:24:10.463322 augenrules[2084]: No rules
Feb 13 21:24:10.464045 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 21:24:10.465190 sudo[2062]: pam_unix(sudo:session): session closed for user root
Feb 13 21:24:10.467157 sshd[2059]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:10.484897 systemd[1]: sshd@7-147.28.180.221:22-139.178.89.65:43780.service: Deactivated successfully.
Feb 13 21:24:10.488768 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 21:24:10.492145 systemd-logind[1806]: Session 10 logged out. Waiting for processes to exit.
Feb 13 21:24:10.508849 systemd[1]: Started sshd@8-147.28.180.221:22-139.178.89.65:43788.service - OpenSSH per-connection server daemon (139.178.89.65:43788).
Feb 13 21:24:10.511665 systemd-logind[1806]: Removed session 10.
Feb 13 21:24:10.569736 sshd[2092]: Accepted publickey for core from 139.178.89.65 port 43788 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:24:10.570424 sshd[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:10.573061 systemd-logind[1806]: New session 11 of user core.
Feb 13 21:24:10.584350 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 21:24:10.636351 sudo[2095]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 21:24:10.636554 sudo[2095]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:10.908406 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 21:24:10.908465 (dockerd)[2122]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 21:24:11.244298 dockerd[2122]: time="2025-02-13T21:24:11.244208868Z" level=info msg="Starting up"
Feb 13 21:24:11.411263 dockerd[2122]: time="2025-02-13T21:24:11.411216115Z" level=info msg="Loading containers: start."
Feb 13 21:24:11.505184 kernel: Initializing XFRM netlink socket
Feb 13 21:24:11.552278 systemd-networkd[1610]: docker0: Link UP
Feb 13 21:24:11.564936 dockerd[2122]: time="2025-02-13T21:24:11.564888322Z" level=info msg="Loading containers: done."
Feb 13 21:24:11.574139 dockerd[2122]: time="2025-02-13T21:24:11.574122132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 21:24:11.574209 dockerd[2122]: time="2025-02-13T21:24:11.574170890Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 21:24:11.574232 dockerd[2122]: time="2025-02-13T21:24:11.574220930Z" level=info msg="Daemon has completed initialization"
Feb 13 21:24:11.588032 dockerd[2122]: time="2025-02-13T21:24:11.587979457Z" level=info msg="API listen on /run/docker.sock"
Feb 13 21:24:11.588072 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 21:24:12.345929 containerd[1824]: time="2025-02-13T21:24:12.345906601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 21:24:12.932732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387354626.mount: Deactivated successfully.
Feb 13 21:24:14.142011 containerd[1824]: time="2025-02-13T21:24:14.141958397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:14.142227 containerd[1824]: time="2025-02-13T21:24:14.142066276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588"
Feb 13 21:24:14.142631 containerd[1824]: time="2025-02-13T21:24:14.142591712Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:14.144055 containerd[1824]: time="2025-02-13T21:24:14.144014811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:14.144720 containerd[1824]: time="2025-02-13T21:24:14.144679830Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.79875084s"
Feb 13 21:24:14.144720 containerd[1824]: time="2025-02-13T21:24:14.144696605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\""
Feb 13 21:24:14.145888 containerd[1824]: time="2025-02-13T21:24:14.145875869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 21:24:15.744825 containerd[1824]: time="2025-02-13T21:24:15.744799510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:15.745057 containerd[1824]: time="2025-02-13T21:24:15.745041462Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193"
Feb 13 21:24:15.745430 containerd[1824]: time="2025-02-13T21:24:15.745416936Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:15.746969 containerd[1824]: time="2025-02-13T21:24:15.746955561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:15.748005 containerd[1824]: time="2025-02-13T21:24:15.747990690Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.602096998s"
Feb 13 21:24:15.748036 containerd[1824]: time="2025-02-13T21:24:15.748007987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\""
Feb 13 21:24:15.748243 containerd[1824]: time="2025-02-13T21:24:15.748231448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 21:24:16.933266 containerd[1824]: time="2025-02-13T21:24:16.933208915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:16.933473 containerd[1824]: time="2025-02-13T21:24:16.933409591Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425"
Feb 13 21:24:16.933802 containerd[1824]: time="2025-02-13T21:24:16.933763216Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:16.935404 containerd[1824]: time="2025-02-13T21:24:16.935363694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:16.936033 containerd[1824]: time="2025-02-13T21:24:16.935992962Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.187746622s"
Feb 13 21:24:16.936033 containerd[1824]: time="2025-02-13T21:24:16.936007481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\""
Feb 13 21:24:16.936285 containerd[1824]: time="2025-02-13T21:24:16.936273546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 21:24:17.742236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890681344.mount: Deactivated successfully.
Feb 13 21:24:18.065110 containerd[1824]: time="2025-02-13T21:24:18.065010207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:18.065349 containerd[1824]: time="2025-02-13T21:24:18.065313887Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 21:24:18.066089 containerd[1824]: time="2025-02-13T21:24:18.066073323Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:18.067104 containerd[1824]: time="2025-02-13T21:24:18.067086303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:18.067523 containerd[1824]: time="2025-02-13T21:24:18.067511465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.131222453s"
Feb 13 21:24:18.067552 containerd[1824]: time="2025-02-13T21:24:18.067526155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 21:24:18.067745 containerd[1824]: time="2025-02-13T21:24:18.067736168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 21:24:18.555371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 21:24:18.568812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:18.573474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027231775.mount: Deactivated successfully.
Feb 13 21:24:18.809317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:18.811696 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 21:24:18.840232 kubelet[2380]: E0213 21:24:18.840123 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 21:24:18.841328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 21:24:18.841438 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 21:24:19.273709 containerd[1824]: time="2025-02-13T21:24:19.273683853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:19.273967 containerd[1824]: time="2025-02-13T21:24:19.273856597Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 21:24:19.274321 containerd[1824]: time="2025-02-13T21:24:19.274309489Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:19.275923 containerd[1824]: time="2025-02-13T21:24:19.275909178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:19.276567 containerd[1824]: time="2025-02-13T21:24:19.276553942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.208803085s"
Feb 13 21:24:19.276600 containerd[1824]: time="2025-02-13T21:24:19.276569132Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 21:24:19.276828 containerd[1824]: time="2025-02-13T21:24:19.276804013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 21:24:19.767832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528012221.mount: Deactivated successfully.
Feb 13 21:24:19.769484 containerd[1824]: time="2025-02-13T21:24:19.769432481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:19.769689 containerd[1824]: time="2025-02-13T21:24:19.769640394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Feb 13 21:24:19.770087 containerd[1824]: time="2025-02-13T21:24:19.770047028Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:19.771275 containerd[1824]: time="2025-02-13T21:24:19.771228930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:19.772029 containerd[1824]: time="2025-02-13T21:24:19.771989836Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.155757ms"
Feb 13 21:24:19.772029 containerd[1824]: time="2025-02-13T21:24:19.772003767Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 21:24:19.772352 containerd[1824]: time="2025-02-13T21:24:19.772311303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 21:24:20.271075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655548375.mount: Deactivated successfully.
Feb 13 21:24:21.295076 containerd[1824]: time="2025-02-13T21:24:21.295019539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:21.295286 containerd[1824]: time="2025-02-13T21:24:21.295219164Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Feb 13 21:24:21.295693 containerd[1824]: time="2025-02-13T21:24:21.295652992Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:21.297490 containerd[1824]: time="2025-02-13T21:24:21.297447644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:21.298228 containerd[1824]: time="2025-02-13T21:24:21.298185285Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.52585943s"
Feb 13 21:24:21.298228 containerd[1824]: time="2025-02-13T21:24:21.298200532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Feb 13 21:24:23.420963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:23.434482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:23.446516 systemd[1]: Reloading requested from client PID 2539 ('systemctl') (unit session-11.scope)...
Feb 13 21:24:23.446523 systemd[1]: Reloading...
Feb 13 21:24:23.488173 zram_generator::config[2578]: No configuration found.
Feb 13 21:24:23.555266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 21:24:23.614284 systemd[1]: Reloading finished in 167 ms.
Feb 13 21:24:23.649545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:23.650812 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:23.651815 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 21:24:23.651915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:23.652767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:23.868247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:23.870368 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 21:24:23.890279 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 21:24:23.890279 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 21:24:23.890279 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 21:24:23.891258 kubelet[2647]: I0213 21:24:23.891195 2647 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 21:24:24.036289 kubelet[2647]: I0213 21:24:24.036245 2647 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 21:24:24.036289 kubelet[2647]: I0213 21:24:24.036257 2647 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 21:24:24.036408 kubelet[2647]: I0213 21:24:24.036372 2647 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 21:24:24.054334 kubelet[2647]: I0213 21:24:24.054295 2647 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 21:24:24.054938 kubelet[2647]: E0213 21:24:24.054898 2647 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.180.221:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError"
Feb 13 21:24:24.059554 kubelet[2647]: E0213 21:24:24.059540 2647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 21:24:24.059590 kubelet[2647]: I0213 21:24:24.059557 2647 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 21:24:24.071087 kubelet[2647]: I0213 21:24:24.071045 2647 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 21:24:24.071991 kubelet[2647]: I0213 21:24:24.071954 2647 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 21:24:24.072045 kubelet[2647]: I0213 21:24:24.072031 2647 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 21:24:24.072170 kubelet[2647]: I0213 21:24:24.072046 2647 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-e8b80a8c0e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 21:24:24.072170 kubelet[2647]: I0213 21:24:24.072144 2647 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 21:24:24.072170 kubelet[2647]: I0213 21:24:24.072150 2647 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 21:24:24.072268 kubelet[2647]: I0213 21:24:24.072202 2647 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 21:24:24.073697 kubelet[2647]: I0213 21:24:24.073670 2647 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 21:24:24.073697 kubelet[2647]: I0213 21:24:24.073680 2647 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 21:24:24.073697 kubelet[2647]: I0213 21:24:24.073696 2647 kubelet.go:314] "Adding apiserver pod source"
Feb 13 21:24:24.073758 kubelet[2647]: I0213 21:24:24.073703 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 21:24:24.077320 kubelet[2647]: W0213 21:24:24.077265 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused
Feb 13 21:24:24.077350 kubelet[2647]: E0213 21:24:24.077331 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.180.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError"
Feb 13 21:24:24.078022 kubelet[2647]: I0213 21:24:24.078012 2647 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 21:24:24.078128 kubelet[2647]: W0213 21:24:24.078068 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-e8b80a8c0e&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused
Feb 13 21:24:24.078187 kubelet[2647]: E0213 21:24:24.078132 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get
\"https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-e8b80a8c0e&limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:24.079617 kubelet[2647]: I0213 21:24:24.079578 2647 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 21:24:24.080105 kubelet[2647]: W0213 21:24:24.080065 2647 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 21:24:24.080419 kubelet[2647]: I0213 21:24:24.080375 2647 server.go:1269] "Started kubelet" Feb 13 21:24:24.080469 kubelet[2647]: I0213 21:24:24.080441 2647 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 21:24:24.080490 kubelet[2647]: I0213 21:24:24.080458 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 21:24:24.080650 kubelet[2647]: I0213 21:24:24.080610 2647 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 21:24:24.081117 kubelet[2647]: I0213 21:24:24.081079 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 21:24:24.081117 kubelet[2647]: I0213 21:24:24.081084 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 21:24:24.081190 kubelet[2647]: E0213 21:24:24.081119 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:24.081190 kubelet[2647]: I0213 21:24:24.081128 2647 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 21:24:24.081190 kubelet[2647]: I0213 21:24:24.081169 2647 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 21:24:24.081274 kubelet[2647]: I0213 21:24:24.081223 
2647 reconciler.go:26] "Reconciler: start to sync state" Feb 13 21:24:24.081274 kubelet[2647]: E0213 21:24:24.081250 2647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-e8b80a8c0e?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="200ms" Feb 13 21:24:24.081312 kubelet[2647]: W0213 21:24:24.081291 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Feb 13 21:24:24.081344 kubelet[2647]: E0213 21:24:24.081316 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:24.081519 kubelet[2647]: I0213 21:24:24.081508 2647 factory.go:221] Registration of the systemd container factory successfully Feb 13 21:24:24.081578 kubelet[2647]: I0213 21:24:24.081567 2647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 21:24:24.081994 kubelet[2647]: I0213 21:24:24.081984 2647 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:24:24.086931 kubelet[2647]: I0213 21:24:24.086917 2647 server.go:460] "Adding debug handlers to kubelet server" Feb 13 21:24:24.087295 kubelet[2647]: E0213 21:24:24.087280 2647 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:24:24.090282 kubelet[2647]: E0213 21:24:24.088377 2647 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.221:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.221:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-e8b80a8c0e.1823e18a09be932b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-e8b80a8c0e,UID:ci-4081.3.1-a-e8b80a8c0e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-e8b80a8c0e,},FirstTimestamp:2025-02-13 21:24:24.080364331 +0000 UTC m=+0.208306937,LastTimestamp:2025-02-13 21:24:24.080364331 +0000 UTC m=+0.208306937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-e8b80a8c0e,}" Feb 13 21:24:24.092982 kubelet[2647]: I0213 21:24:24.092964 2647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:24:24.093556 kubelet[2647]: I0213 21:24:24.093523 2647 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 21:24:24.093556 kubelet[2647]: I0213 21:24:24.093535 2647 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 21:24:24.093556 kubelet[2647]: I0213 21:24:24.093545 2647 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 21:24:24.093622 kubelet[2647]: E0213 21:24:24.093566 2647 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 21:24:24.094821 kubelet[2647]: W0213 21:24:24.094792 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Feb 13 21:24:24.094850 kubelet[2647]: E0213 21:24:24.094826 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:24.182131 kubelet[2647]: E0213 21:24:24.182032 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:24.194574 kubelet[2647]: E0213 21:24:24.194454 2647 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 21:24:24.231536 kubelet[2647]: I0213 21:24:24.231490 2647 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 21:24:24.231536 kubelet[2647]: I0213 21:24:24.231526 2647 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 21:24:24.231836 kubelet[2647]: I0213 21:24:24.231568 2647 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:24:24.233638 kubelet[2647]: I0213 21:24:24.233594 
2647 policy_none.go:49] "None policy: Start" Feb 13 21:24:24.234192 kubelet[2647]: I0213 21:24:24.234162 2647 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 21:24:24.234192 kubelet[2647]: I0213 21:24:24.234187 2647 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:24:24.240931 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 21:24:24.256264 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 21:24:24.278945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 21:24:24.280512 kubelet[2647]: I0213 21:24:24.280450 2647 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 21:24:24.280764 kubelet[2647]: I0213 21:24:24.280712 2647 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 21:24:24.280764 kubelet[2647]: I0213 21:24:24.280729 2647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 21:24:24.280993 kubelet[2647]: I0213 21:24:24.280966 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 21:24:24.281736 kubelet[2647]: E0213 21:24:24.281664 2647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-e8b80a8c0e?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="400ms" Feb 13 21:24:24.282114 kubelet[2647]: E0213 21:24:24.282048 2647 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:24.385416 kubelet[2647]: I0213 21:24:24.385358 2647 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.386129 
kubelet[2647]: E0213 21:24:24.386002 2647 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.180.221:6443/api/v1/nodes\": dial tcp 147.28.180.221:6443: connect: connection refused" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.415519 systemd[1]: Created slice kubepods-burstable-pod5c9ccdc7392d7a029cc688af21849873.slice - libcontainer container kubepods-burstable-pod5c9ccdc7392d7a029cc688af21849873.slice. Feb 13 21:24:24.446303 systemd[1]: Created slice kubepods-burstable-pod13df9affc95d5cd226ce646b303faf31.slice - libcontainer container kubepods-burstable-pod13df9affc95d5cd226ce646b303faf31.slice. Feb 13 21:24:24.455683 systemd[1]: Created slice kubepods-burstable-podad7b55d46ac2c7f319026ac5d531604c.slice - libcontainer container kubepods-burstable-podad7b55d46ac2c7f319026ac5d531604c.slice. Feb 13 21:24:24.584012 kubelet[2647]: I0213 21:24:24.583932 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c9ccdc7392d7a029cc688af21849873-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"5c9ccdc7392d7a029cc688af21849873\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584319 kubelet[2647]: I0213 21:24:24.584035 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c9ccdc7392d7a029cc688af21849873-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"5c9ccdc7392d7a029cc688af21849873\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584319 kubelet[2647]: I0213 21:24:24.584123 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584319 kubelet[2647]: I0213 21:24:24.584175 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad7b55d46ac2c7f319026ac5d531604c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"ad7b55d46ac2c7f319026ac5d531604c\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584319 kubelet[2647]: I0213 21:24:24.584222 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c9ccdc7392d7a029cc688af21849873-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"5c9ccdc7392d7a029cc688af21849873\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584319 kubelet[2647]: I0213 21:24:24.584273 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584833 kubelet[2647]: I0213 21:24:24.584322 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584833 kubelet[2647]: I0213 21:24:24.584381 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.584833 kubelet[2647]: I0213 21:24:24.584431 2647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.591177 kubelet[2647]: I0213 21:24:24.591127 2647 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.591904 kubelet[2647]: E0213 21:24:24.591837 2647 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.180.221:6443/api/v1/nodes\": dial tcp 147.28.180.221:6443: connect: connection refused" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.683291 kubelet[2647]: E0213 21:24:24.683162 2647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-e8b80a8c0e?timeout=10s\": dial tcp 147.28.180.221:6443: connect: connection refused" interval="800ms" Feb 13 21:24:24.739444 containerd[1824]: time="2025-02-13T21:24:24.739211350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-e8b80a8c0e,Uid:5c9ccdc7392d7a029cc688af21849873,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:24.752556 containerd[1824]: time="2025-02-13T21:24:24.752461493Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e,Uid:13df9affc95d5cd226ce646b303faf31,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:24.761026 containerd[1824]: time="2025-02-13T21:24:24.761012103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-e8b80a8c0e,Uid:ad7b55d46ac2c7f319026ac5d531604c,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:24.996151 kubelet[2647]: W0213 21:24:24.995850 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Feb 13 21:24:24.996151 kubelet[2647]: E0213 21:24:24.996001 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.180.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:24.997183 kubelet[2647]: I0213 21:24:24.996960 2647 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:24.997736 kubelet[2647]: E0213 21:24:24.997629 2647 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.180.221:6443/api/v1/nodes\": dial tcp 147.28.180.221:6443: connect: connection refused" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:25.195659 kubelet[2647]: W0213 21:24:25.195589 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Feb 13 21:24:25.195659 kubelet[2647]: E0213 21:24:25.195645 2647 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.180.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:25.226806 kubelet[2647]: W0213 21:24:25.226704 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-e8b80a8c0e&limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Feb 13 21:24:25.226806 kubelet[2647]: E0213 21:24:25.226786 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-e8b80a8c0e&limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:25.235483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186843822.mount: Deactivated successfully. 
Feb 13 21:24:25.237123 containerd[1824]: time="2025-02-13T21:24:25.237054232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:25.237347 containerd[1824]: time="2025-02-13T21:24:25.237326045Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:24:25.237932 containerd[1824]: time="2025-02-13T21:24:25.237893792Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:25.238364 containerd[1824]: time="2025-02-13T21:24:25.238331142Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:25.238397 containerd[1824]: time="2025-02-13T21:24:25.238359206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:24:25.238893 containerd[1824]: time="2025-02-13T21:24:25.238876093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 21:24:25.239094 containerd[1824]: time="2025-02-13T21:24:25.239081571Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:25.241424 containerd[1824]: time="2025-02-13T21:24:25.241409804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.35315ms" Feb 13 21:24:25.242234 containerd[1824]: time="2025-02-13T21:24:25.242193394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:25.242729 containerd[1824]: time="2025-02-13T21:24:25.242685799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.068294ms" Feb 13 21:24:25.244288 containerd[1824]: time="2025-02-13T21:24:25.244251848Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 504.870537ms" Feb 13 21:24:25.334874 kubelet[2647]: W0213 21:24:25.334804 2647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.221:6443: connect: connection refused Feb 13 21:24:25.334874 kubelet[2647]: E0213 21:24:25.334834 2647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.221:6443: connect: connection refused" 
logger="UnhandledError" Feb 13 21:24:25.339159 containerd[1824]: time="2025-02-13T21:24:25.338913223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:25.339159 containerd[1824]: time="2025-02-13T21:24:25.339139847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:25.339159 containerd[1824]: time="2025-02-13T21:24:25.339147575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:25.339159 containerd[1824]: time="2025-02-13T21:24:25.339127195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:25.339159 containerd[1824]: time="2025-02-13T21:24:25.339153910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:25.339159 containerd[1824]: time="2025-02-13T21:24:25.339160994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:25.339337 containerd[1824]: time="2025-02-13T21:24:25.339206746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:25.339337 containerd[1824]: time="2025-02-13T21:24:25.339232738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:25.339337 containerd[1824]: time="2025-02-13T21:24:25.339239528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:25.339337 containerd[1824]: time="2025-02-13T21:24:25.339294493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:25.339337 containerd[1824]: time="2025-02-13T21:24:25.339311973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:25.339532 containerd[1824]: time="2025-02-13T21:24:25.339514563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:25.362304 systemd[1]: Started cri-containerd-7ecfaf5c6fbb87bff91534cec997e5de625cbc774cc97d12e258b9b7273f8640.scope - libcontainer container 7ecfaf5c6fbb87bff91534cec997e5de625cbc774cc97d12e258b9b7273f8640. Feb 13 21:24:25.363131 systemd[1]: Started cri-containerd-83c98dd4503dd94df1fd8e5e30302b452c0b06d582b4641bf78e0a879867baf4.scope - libcontainer container 83c98dd4503dd94df1fd8e5e30302b452c0b06d582b4641bf78e0a879867baf4. Feb 13 21:24:25.363869 systemd[1]: Started cri-containerd-f98e2123a03a6bfeb07baaac6a13d4eb65927e6b0f85134808c4d90b0258eb0d.scope - libcontainer container f98e2123a03a6bfeb07baaac6a13d4eb65927e6b0f85134808c4d90b0258eb0d. 
Feb 13 21:24:25.387904 containerd[1824]: time="2025-02-13T21:24:25.387878765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-e8b80a8c0e,Uid:ad7b55d46ac2c7f319026ac5d531604c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ecfaf5c6fbb87bff91534cec997e5de625cbc774cc97d12e258b9b7273f8640\"" Feb 13 21:24:25.388581 containerd[1824]: time="2025-02-13T21:24:25.388561237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e,Uid:13df9affc95d5cd226ce646b303faf31,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c98dd4503dd94df1fd8e5e30302b452c0b06d582b4641bf78e0a879867baf4\"" Feb 13 21:24:25.389424 containerd[1824]: time="2025-02-13T21:24:25.389402805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-e8b80a8c0e,Uid:5c9ccdc7392d7a029cc688af21849873,Namespace:kube-system,Attempt:0,} returns sandbox id \"f98e2123a03a6bfeb07baaac6a13d4eb65927e6b0f85134808c4d90b0258eb0d\"" Feb 13 21:24:25.389710 containerd[1824]: time="2025-02-13T21:24:25.389692958Z" level=info msg="CreateContainer within sandbox \"83c98dd4503dd94df1fd8e5e30302b452c0b06d582b4641bf78e0a879867baf4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 21:24:25.389745 containerd[1824]: time="2025-02-13T21:24:25.389733730Z" level=info msg="CreateContainer within sandbox \"7ecfaf5c6fbb87bff91534cec997e5de625cbc774cc97d12e258b9b7273f8640\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 21:24:25.390465 containerd[1824]: time="2025-02-13T21:24:25.390450860Z" level=info msg="CreateContainer within sandbox \"f98e2123a03a6bfeb07baaac6a13d4eb65927e6b0f85134808c4d90b0258eb0d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 21:24:25.397192 containerd[1824]: time="2025-02-13T21:24:25.397169363Z" level=info msg="CreateContainer within sandbox 
\"83c98dd4503dd94df1fd8e5e30302b452c0b06d582b4641bf78e0a879867baf4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40d545f3372883e8e49beb29ccbd59e54036468c69b8a70e3fd966cf7f74a33e\"" Feb 13 21:24:25.397357 containerd[1824]: time="2025-02-13T21:24:25.397343157Z" level=info msg="CreateContainer within sandbox \"7ecfaf5c6fbb87bff91534cec997e5de625cbc774cc97d12e258b9b7273f8640\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"293b517f6fbc04dcc4301bc1d2967644cdfd7db0cfd531ba01d70730c7080351\"" Feb 13 21:24:25.397421 containerd[1824]: time="2025-02-13T21:24:25.397408340Z" level=info msg="StartContainer for \"40d545f3372883e8e49beb29ccbd59e54036468c69b8a70e3fd966cf7f74a33e\"" Feb 13 21:24:25.397507 containerd[1824]: time="2025-02-13T21:24:25.397493007Z" level=info msg="StartContainer for \"293b517f6fbc04dcc4301bc1d2967644cdfd7db0cfd531ba01d70730c7080351\"" Feb 13 21:24:25.397861 containerd[1824]: time="2025-02-13T21:24:25.397846732Z" level=info msg="CreateContainer within sandbox \"f98e2123a03a6bfeb07baaac6a13d4eb65927e6b0f85134808c4d90b0258eb0d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3a87f8ca61c0479284716c8e38763a42743836fba8c810040ddd431c8ce25f9\"" Feb 13 21:24:25.398001 containerd[1824]: time="2025-02-13T21:24:25.397990358Z" level=info msg="StartContainer for \"b3a87f8ca61c0479284716c8e38763a42743836fba8c810040ddd431c8ce25f9\"" Feb 13 21:24:25.425402 systemd[1]: Started cri-containerd-293b517f6fbc04dcc4301bc1d2967644cdfd7db0cfd531ba01d70730c7080351.scope - libcontainer container 293b517f6fbc04dcc4301bc1d2967644cdfd7db0cfd531ba01d70730c7080351. Feb 13 21:24:25.425964 systemd[1]: Started cri-containerd-40d545f3372883e8e49beb29ccbd59e54036468c69b8a70e3fd966cf7f74a33e.scope - libcontainer container 40d545f3372883e8e49beb29ccbd59e54036468c69b8a70e3fd966cf7f74a33e. 
Feb 13 21:24:25.426513 systemd[1]: Started cri-containerd-b3a87f8ca61c0479284716c8e38763a42743836fba8c810040ddd431c8ce25f9.scope - libcontainer container b3a87f8ca61c0479284716c8e38763a42743836fba8c810040ddd431c8ce25f9. Feb 13 21:24:25.449027 containerd[1824]: time="2025-02-13T21:24:25.449000546Z" level=info msg="StartContainer for \"b3a87f8ca61c0479284716c8e38763a42743836fba8c810040ddd431c8ce25f9\" returns successfully" Feb 13 21:24:25.449126 containerd[1824]: time="2025-02-13T21:24:25.449086310Z" level=info msg="StartContainer for \"293b517f6fbc04dcc4301bc1d2967644cdfd7db0cfd531ba01d70730c7080351\" returns successfully" Feb 13 21:24:25.449579 containerd[1824]: time="2025-02-13T21:24:25.449567157Z" level=info msg="StartContainer for \"40d545f3372883e8e49beb29ccbd59e54036468c69b8a70e3fd966cf7f74a33e\" returns successfully" Feb 13 21:24:25.799258 kubelet[2647]: I0213 21:24:25.799192 2647 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:26.128985 kubelet[2647]: E0213 21:24:26.128900 2647 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-e8b80a8c0e\" not found" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:26.231752 kubelet[2647]: I0213 21:24:26.231732 2647 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:26.231752 kubelet[2647]: E0213 21:24:26.231754 2647 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.1-a-e8b80a8c0e\": node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:26.236558 kubelet[2647]: E0213 21:24:26.236515 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:26.337530 kubelet[2647]: E0213 21:24:26.337397 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 
21:24:26.438656 kubelet[2647]: E0213 21:24:26.438541 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:26.538942 kubelet[2647]: E0213 21:24:26.538815 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:26.639762 kubelet[2647]: E0213 21:24:26.639642 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:26.740505 kubelet[2647]: E0213 21:24:26.740274 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:26.841548 kubelet[2647]: E0213 21:24:26.841431 2647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:27.075548 kubelet[2647]: I0213 21:24:27.075328 2647 apiserver.go:52] "Watching apiserver" Feb 13 21:24:27.081555 kubelet[2647]: I0213 21:24:27.081462 2647 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 21:24:27.112406 kubelet[2647]: E0213 21:24:27.112297 2647 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:27.112576 kubelet[2647]: E0213 21:24:27.112400 2647 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:28.826880 systemd[1]: Reloading requested from client PID 2962 ('systemctl') (unit session-11.scope)... Feb 13 21:24:28.826891 systemd[1]: Reloading... 
Feb 13 21:24:28.871163 zram_generator::config[3001]: No configuration found. Feb 13 21:24:28.946340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:24:29.015530 systemd[1]: Reloading finished in 188 ms. Feb 13 21:24:29.054292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:24:29.059967 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 21:24:29.060072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:24:29.074445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:24:29.308528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:24:29.310854 (kubelet)[3065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 21:24:29.329676 kubelet[3065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:24:29.329676 kubelet[3065]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 21:24:29.329676 kubelet[3065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 21:24:29.329909 kubelet[3065]: I0213 21:24:29.329683 3065 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 21:24:29.332770 kubelet[3065]: I0213 21:24:29.332730 3065 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 21:24:29.332770 kubelet[3065]: I0213 21:24:29.332741 3065 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 21:24:29.332890 kubelet[3065]: I0213 21:24:29.332860 3065 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 21:24:29.334755 kubelet[3065]: I0213 21:24:29.334717 3065 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 21:24:29.335748 kubelet[3065]: I0213 21:24:29.335739 3065 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:24:29.337137 kubelet[3065]: E0213 21:24:29.337122 3065 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 21:24:29.337171 kubelet[3065]: I0213 21:24:29.337137 3065 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 21:24:29.343602 kubelet[3065]: I0213 21:24:29.343560 3065 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 21:24:29.343638 kubelet[3065]: I0213 21:24:29.343609 3065 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 21:24:29.343709 kubelet[3065]: I0213 21:24:29.343665 3065 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 21:24:29.343803 kubelet[3065]: I0213 21:24:29.343679 3065 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-e8b80a8c0e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 21:24:29.343803 kubelet[3065]: I0213 21:24:29.343771 3065 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 21:24:29.343803 kubelet[3065]: I0213 21:24:29.343776 3065 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 21:24:29.343803 kubelet[3065]: I0213 21:24:29.343795 3065 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:24:29.343914 kubelet[3065]: I0213 21:24:29.343861 3065 kubelet.go:408] "Attempting to sync node with API server" Feb 13 21:24:29.343914 kubelet[3065]: I0213 21:24:29.343868 3065 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 21:24:29.343914 kubelet[3065]: I0213 21:24:29.343882 3065 kubelet.go:314] "Adding apiserver pod source" Feb 13 21:24:29.343914 kubelet[3065]: I0213 21:24:29.343889 3065 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 21:24:29.344641 kubelet[3065]: I0213 21:24:29.344473 3065 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 21:24:29.345006 kubelet[3065]: I0213 21:24:29.344997 3065 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 21:24:29.345237 kubelet[3065]: I0213 21:24:29.345228 3065 server.go:1269] "Started kubelet" Feb 13 21:24:29.345290 kubelet[3065]: I0213 21:24:29.345263 3065 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 21:24:29.345325 kubelet[3065]: I0213 21:24:29.345277 3065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 21:24:29.345439 kubelet[3065]: I0213 21:24:29.345430 3065 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 21:24:29.345835 kubelet[3065]: I0213 21:24:29.345827 3065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 
13 21:24:29.345882 kubelet[3065]: I0213 21:24:29.345860 3065 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 21:24:29.345927 kubelet[3065]: E0213 21:24:29.345913 3065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-e8b80a8c0e\" not found" Feb 13 21:24:29.345927 kubelet[3065]: I0213 21:24:29.345919 3065 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 21:24:29.345973 kubelet[3065]: I0213 21:24:29.345940 3065 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 21:24:29.346095 kubelet[3065]: I0213 21:24:29.346083 3065 reconciler.go:26] "Reconciler: start to sync state" Feb 13 21:24:29.346165 kubelet[3065]: I0213 21:24:29.346150 3065 factory.go:221] Registration of the systemd container factory successfully Feb 13 21:24:29.346240 kubelet[3065]: E0213 21:24:29.346175 3065 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:24:29.346291 kubelet[3065]: I0213 21:24:29.346275 3065 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 21:24:29.346395 kubelet[3065]: I0213 21:24:29.346385 3065 server.go:460] "Adding debug handlers to kubelet server" Feb 13 21:24:29.347284 kubelet[3065]: I0213 21:24:29.347271 3065 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:24:29.351128 kubelet[3065]: I0213 21:24:29.351110 3065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:24:29.351650 kubelet[3065]: I0213 21:24:29.351641 3065 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 21:24:29.351695 kubelet[3065]: I0213 21:24:29.351657 3065 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 21:24:29.351695 kubelet[3065]: I0213 21:24:29.351666 3065 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 21:24:29.351695 kubelet[3065]: E0213 21:24:29.351687 3065 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 21:24:29.362138 kubelet[3065]: I0213 21:24:29.362122 3065 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 21:24:29.362138 kubelet[3065]: I0213 21:24:29.362134 3065 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 21:24:29.362237 kubelet[3065]: I0213 21:24:29.362146 3065 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:24:29.362237 kubelet[3065]: I0213 21:24:29.362231 3065 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 21:24:29.362272 kubelet[3065]: I0213 21:24:29.362237 3065 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 21:24:29.362272 kubelet[3065]: I0213 21:24:29.362249 3065 policy_none.go:49] "None policy: Start" Feb 13 21:24:29.362544 kubelet[3065]: I0213 21:24:29.362535 3065 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 21:24:29.362574 kubelet[3065]: I0213 21:24:29.362547 3065 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:24:29.362675 kubelet[3065]: I0213 21:24:29.362645 3065 state_mem.go:75] "Updated machine memory state" Feb 13 21:24:29.364553 kubelet[3065]: I0213 21:24:29.364517 3065 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 21:24:29.364636 kubelet[3065]: I0213 21:24:29.364597 3065 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 21:24:29.364636 kubelet[3065]: I0213 21:24:29.364604 3065 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 21:24:29.364749 kubelet[3065]: I0213 21:24:29.364695 3065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 21:24:29.460900 kubelet[3065]: W0213 21:24:29.460841 3065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:29.460900 kubelet[3065]: W0213 21:24:29.460873 3065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:29.460900 kubelet[3065]: W0213 21:24:29.460898 3065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:29.471931 kubelet[3065]: I0213 21:24:29.471873 3065 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.481077 kubelet[3065]: I0213 21:24:29.481022 3065 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.481310 kubelet[3065]: I0213 21:24:29.481217 3065 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647362 kubelet[3065]: I0213 21:24:29.647118 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c9ccdc7392d7a029cc688af21849873-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"5c9ccdc7392d7a029cc688af21849873\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647362 kubelet[3065]: I0213 21:24:29.647238 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5c9ccdc7392d7a029cc688af21849873-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"5c9ccdc7392d7a029cc688af21849873\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647362 kubelet[3065]: I0213 21:24:29.647310 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647803 kubelet[3065]: I0213 21:24:29.647391 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647803 kubelet[3065]: I0213 21:24:29.647507 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647803 kubelet[3065]: I0213 21:24:29.647595 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad7b55d46ac2c7f319026ac5d531604c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"ad7b55d46ac2c7f319026ac5d531604c\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-e8b80a8c0e" Feb 13 
21:24:29.647803 kubelet[3065]: I0213 21:24:29.647663 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c9ccdc7392d7a029cc688af21849873-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"5c9ccdc7392d7a029cc688af21849873\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.647803 kubelet[3065]: I0213 21:24:29.647713 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:29.648309 kubelet[3065]: I0213 21:24:29.647807 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13df9affc95d5cd226ce646b303faf31-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" (UID: \"13df9affc95d5cd226ce646b303faf31\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:30.344267 kubelet[3065]: I0213 21:24:30.344204 3065 apiserver.go:52] "Watching apiserver" Feb 13 21:24:30.366178 kubelet[3065]: W0213 21:24:30.366036 3065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:30.366453 kubelet[3065]: W0213 21:24:30.366328 3065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:30.366752 kubelet[3065]: E0213 21:24:30.366331 3065 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-e8b80a8c0e\" already 
exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:30.366752 kubelet[3065]: E0213 21:24:30.366537 3065 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:30.402206 kubelet[3065]: I0213 21:24:30.402065 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-e8b80a8c0e" podStartSLOduration=1.402034912 podStartE2EDuration="1.402034912s" podCreationTimestamp="2025-02-13 21:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:30.402036632 +0000 UTC m=+1.089219665" watchObservedRunningTime="2025-02-13 21:24:30.402034912 +0000 UTC m=+1.089217935" Feb 13 21:24:30.415655 kubelet[3065]: I0213 21:24:30.415583 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-e8b80a8c0e" podStartSLOduration=1.415567069 podStartE2EDuration="1.415567069s" podCreationTimestamp="2025-02-13 21:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:30.409530348 +0000 UTC m=+1.096713382" watchObservedRunningTime="2025-02-13 21:24:30.415567069 +0000 UTC m=+1.102750097" Feb 13 21:24:30.415766 kubelet[3065]: I0213 21:24:30.415693 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-e8b80a8c0e" podStartSLOduration=1.415686842 podStartE2EDuration="1.415686842s" podCreationTimestamp="2025-02-13 21:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:30.41563529 +0000 UTC m=+1.102818314" 
watchObservedRunningTime="2025-02-13 21:24:30.415686842 +0000 UTC m=+1.102869859" Feb 13 21:24:30.447293 kubelet[3065]: I0213 21:24:30.447199 3065 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 21:24:33.580699 sudo[2095]: pam_unix(sudo:session): session closed for user root Feb 13 21:24:33.581557 sshd[2092]: pam_unix(sshd:session): session closed for user core Feb 13 21:24:33.583090 kubelet[3065]: I0213 21:24:33.583077 3065 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 21:24:33.583304 containerd[1824]: time="2025-02-13T21:24:33.583254165Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 21:24:33.583448 kubelet[3065]: I0213 21:24:33.583347 3065 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 21:24:33.583377 systemd[1]: sshd@8-147.28.180.221:22-139.178.89.65:43788.service: Deactivated successfully. Feb 13 21:24:33.584228 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 21:24:33.584321 systemd[1]: session-11.scope: Consumed 3.404s CPU time, 164.7M memory peak, 0B memory swap peak. Feb 13 21:24:33.584579 systemd-logind[1806]: Session 11 logged out. Waiting for processes to exit. Feb 13 21:24:33.585002 systemd-logind[1806]: Removed session 11. Feb 13 21:24:34.517186 systemd[1]: Created slice kubepods-besteffort-pod8cf36142_09da_4f53_b0dc_e68541f3c05e.slice - libcontainer container kubepods-besteffort-pod8cf36142_09da_4f53_b0dc_e68541f3c05e.slice. 
Feb 13 21:24:34.584070 kubelet[3065]: I0213 21:24:34.583955 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cf36142-09da-4f53-b0dc-e68541f3c05e-kube-proxy\") pod \"kube-proxy-qz65w\" (UID: \"8cf36142-09da-4f53-b0dc-e68541f3c05e\") " pod="kube-system/kube-proxy-qz65w" Feb 13 21:24:34.584070 kubelet[3065]: I0213 21:24:34.584055 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cf36142-09da-4f53-b0dc-e68541f3c05e-xtables-lock\") pod \"kube-proxy-qz65w\" (UID: \"8cf36142-09da-4f53-b0dc-e68541f3c05e\") " pod="kube-system/kube-proxy-qz65w" Feb 13 21:24:34.585013 kubelet[3065]: I0213 21:24:34.584135 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cf36142-09da-4f53-b0dc-e68541f3c05e-lib-modules\") pod \"kube-proxy-qz65w\" (UID: \"8cf36142-09da-4f53-b0dc-e68541f3c05e\") " pod="kube-system/kube-proxy-qz65w" Feb 13 21:24:34.585013 kubelet[3065]: I0213 21:24:34.584200 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhlp2\" (UniqueName: \"kubernetes.io/projected/8cf36142-09da-4f53-b0dc-e68541f3c05e-kube-api-access-hhlp2\") pod \"kube-proxy-qz65w\" (UID: \"8cf36142-09da-4f53-b0dc-e68541f3c05e\") " pod="kube-system/kube-proxy-qz65w" Feb 13 21:24:34.730020 systemd[1]: Created slice kubepods-besteffort-pode7f9dc90_3777_4f0f_b467_d976a432ace4.slice - libcontainer container kubepods-besteffort-pode7f9dc90_3777_4f0f_b467_d976a432ace4.slice. 
Feb 13 21:24:34.785742 kubelet[3065]: I0213 21:24:34.785517 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxlq\" (UniqueName: \"kubernetes.io/projected/e7f9dc90-3777-4f0f-b467-d976a432ace4-kube-api-access-fpxlq\") pod \"tigera-operator-76c4976dd7-hllfk\" (UID: \"e7f9dc90-3777-4f0f-b467-d976a432ace4\") " pod="tigera-operator/tigera-operator-76c4976dd7-hllfk" Feb 13 21:24:34.785742 kubelet[3065]: I0213 21:24:34.785708 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7f9dc90-3777-4f0f-b467-d976a432ace4-var-lib-calico\") pod \"tigera-operator-76c4976dd7-hllfk\" (UID: \"e7f9dc90-3777-4f0f-b467-d976a432ace4\") " pod="tigera-operator/tigera-operator-76c4976dd7-hllfk" Feb 13 21:24:34.835151 containerd[1824]: time="2025-02-13T21:24:34.834996717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz65w,Uid:8cf36142-09da-4f53-b0dc-e68541f3c05e,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:34.847344 containerd[1824]: time="2025-02-13T21:24:34.847272354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:34.847344 containerd[1824]: time="2025-02-13T21:24:34.847305078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:34.847344 containerd[1824]: time="2025-02-13T21:24:34.847312421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:34.847451 containerd[1824]: time="2025-02-13T21:24:34.847356687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:34.872348 systemd[1]: Started cri-containerd-fb150e5c579bdc6402e7cccb1c83229d75c0e1400aa2a5d167140e91e5677f16.scope - libcontainer container fb150e5c579bdc6402e7cccb1c83229d75c0e1400aa2a5d167140e91e5677f16. Feb 13 21:24:34.882924 containerd[1824]: time="2025-02-13T21:24:34.882876701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz65w,Uid:8cf36142-09da-4f53-b0dc-e68541f3c05e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb150e5c579bdc6402e7cccb1c83229d75c0e1400aa2a5d167140e91e5677f16\"" Feb 13 21:24:34.884355 containerd[1824]: time="2025-02-13T21:24:34.884336333Z" level=info msg="CreateContainer within sandbox \"fb150e5c579bdc6402e7cccb1c83229d75c0e1400aa2a5d167140e91e5677f16\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 21:24:34.891326 containerd[1824]: time="2025-02-13T21:24:34.891280194Z" level=info msg="CreateContainer within sandbox \"fb150e5c579bdc6402e7cccb1c83229d75c0e1400aa2a5d167140e91e5677f16\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5053fbd7f705b80cebf03337fbfc80631c4f2fbd07015eaf8a6206255102169\"" Feb 13 21:24:34.891586 containerd[1824]: time="2025-02-13T21:24:34.891552134Z" level=info msg="StartContainer for \"c5053fbd7f705b80cebf03337fbfc80631c4f2fbd07015eaf8a6206255102169\"" Feb 13 21:24:34.921604 systemd[1]: Started cri-containerd-c5053fbd7f705b80cebf03337fbfc80631c4f2fbd07015eaf8a6206255102169.scope - libcontainer container c5053fbd7f705b80cebf03337fbfc80631c4f2fbd07015eaf8a6206255102169. 
Feb 13 21:24:34.981234 containerd[1824]: time="2025-02-13T21:24:34.981189934Z" level=info msg="StartContainer for \"c5053fbd7f705b80cebf03337fbfc80631c4f2fbd07015eaf8a6206255102169\" returns successfully" Feb 13 21:24:35.034381 containerd[1824]: time="2025-02-13T21:24:35.034288606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hllfk,Uid:e7f9dc90-3777-4f0f-b467-d976a432ace4,Namespace:tigera-operator,Attempt:0,}" Feb 13 21:24:35.044391 containerd[1824]: time="2025-02-13T21:24:35.044250472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:35.044527 containerd[1824]: time="2025-02-13T21:24:35.044302750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:35.044527 containerd[1824]: time="2025-02-13T21:24:35.044493896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.044595 containerd[1824]: time="2025-02-13T21:24:35.044536884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.066442 systemd[1]: Started cri-containerd-1349e9c6d839dfa9c37d746279299d8b72793798709810ee5c884c79d1af63d3.scope - libcontainer container 1349e9c6d839dfa9c37d746279299d8b72793798709810ee5c884c79d1af63d3. 
Feb 13 21:24:35.088082 containerd[1824]: time="2025-02-13T21:24:35.088056977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-hllfk,Uid:e7f9dc90-3777-4f0f-b467-d976a432ace4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1349e9c6d839dfa9c37d746279299d8b72793798709810ee5c884c79d1af63d3\"" Feb 13 21:24:35.088831 containerd[1824]: time="2025-02-13T21:24:35.088818702Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 21:24:35.383667 kubelet[3065]: I0213 21:24:35.383442 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qz65w" podStartSLOduration=1.383408217 podStartE2EDuration="1.383408217s" podCreationTimestamp="2025-02-13 21:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:35.383361024 +0000 UTC m=+6.070544081" watchObservedRunningTime="2025-02-13 21:24:35.383408217 +0000 UTC m=+6.070591283" Feb 13 21:24:36.581931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791780544.mount: Deactivated successfully. 
Feb 13 21:24:36.783298 containerd[1824]: time="2025-02-13T21:24:36.783273388Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:36.783498 containerd[1824]: time="2025-02-13T21:24:36.783478285Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 21:24:36.783763 containerd[1824]: time="2025-02-13T21:24:36.783752393Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:36.785151 containerd[1824]: time="2025-02-13T21:24:36.785127277Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:36.785504 containerd[1824]: time="2025-02-13T21:24:36.785463437Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.696626578s" Feb 13 21:24:36.785504 containerd[1824]: time="2025-02-13T21:24:36.785479230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 21:24:36.786403 containerd[1824]: time="2025-02-13T21:24:36.786390506Z" level=info msg="CreateContainer within sandbox \"1349e9c6d839dfa9c37d746279299d8b72793798709810ee5c884c79d1af63d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 21:24:36.789966 containerd[1824]: time="2025-02-13T21:24:36.789924744Z" level=info msg="CreateContainer within sandbox 
\"1349e9c6d839dfa9c37d746279299d8b72793798709810ee5c884c79d1af63d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ac23f60df6fe361ae0da34617e9fba99664d8712f0224550ad95acce47278a38\"" Feb 13 21:24:36.790141 containerd[1824]: time="2025-02-13T21:24:36.790129150Z" level=info msg="StartContainer for \"ac23f60df6fe361ae0da34617e9fba99664d8712f0224550ad95acce47278a38\"" Feb 13 21:24:36.814390 systemd[1]: Started cri-containerd-ac23f60df6fe361ae0da34617e9fba99664d8712f0224550ad95acce47278a38.scope - libcontainer container ac23f60df6fe361ae0da34617e9fba99664d8712f0224550ad95acce47278a38. Feb 13 21:24:36.825759 containerd[1824]: time="2025-02-13T21:24:36.825733735Z" level=info msg="StartContainer for \"ac23f60df6fe361ae0da34617e9fba99664d8712f0224550ad95acce47278a38\" returns successfully" Feb 13 21:24:37.395837 kubelet[3065]: I0213 21:24:37.395789 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-hllfk" podStartSLOduration=1.69850258 podStartE2EDuration="3.395755986s" podCreationTimestamp="2025-02-13 21:24:34 +0000 UTC" firstStartedPulling="2025-02-13 21:24:35.08860885 +0000 UTC m=+5.775791859" lastFinishedPulling="2025-02-13 21:24:36.785862258 +0000 UTC m=+7.473045265" observedRunningTime="2025-02-13 21:24:37.395676894 +0000 UTC m=+8.082859901" watchObservedRunningTime="2025-02-13 21:24:37.395755986 +0000 UTC m=+8.082938992" Feb 13 21:24:39.839365 systemd[1]: Created slice kubepods-besteffort-pode1f69b13_b780_4899_86a7_24a3805ad3b1.slice - libcontainer container kubepods-besteffort-pode1f69b13_b780_4899_86a7_24a3805ad3b1.slice. Feb 13 21:24:39.851314 systemd[1]: Created slice kubepods-besteffort-pod0a779ef4_9b40_4afc_a777_c841c4123db1.slice - libcontainer container kubepods-besteffort-pod0a779ef4_9b40_4afc_a777_c841c4123db1.slice. 
Feb 13 21:24:39.872738 kubelet[3065]: E0213 21:24:39.872703 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w68mc" podUID="439af64d-2df4-4687-a7d1-8e9f8ba04da6" Feb 13 21:24:39.924115 kubelet[3065]: I0213 21:24:39.924088 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-cni-log-dir\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924115 kubelet[3065]: I0213 21:24:39.924121 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-policysync\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924226 kubelet[3065]: I0213 21:24:39.924135 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd77x\" (UniqueName: \"kubernetes.io/projected/0a779ef4-9b40-4afc-a777-c841c4123db1-kube-api-access-rd77x\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924226 kubelet[3065]: I0213 21:24:39.924157 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-lib-modules\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924226 kubelet[3065]: I0213 21:24:39.924166 3065 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-xtables-lock\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924226 kubelet[3065]: I0213 21:24:39.924175 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-flexvol-driver-host\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924226 kubelet[3065]: I0213 21:24:39.924200 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/439af64d-2df4-4687-a7d1-8e9f8ba04da6-registration-dir\") pod \"csi-node-driver-w68mc\" (UID: \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\") " pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:39.924311 kubelet[3065]: I0213 21:24:39.924235 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a779ef4-9b40-4afc-a777-c841c4123db1-node-certs\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924311 kubelet[3065]: I0213 21:24:39.924250 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-var-run-calico\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924311 kubelet[3065]: I0213 21:24:39.924259 3065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-cni-bin-dir\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924311 kubelet[3065]: I0213 21:24:39.924268 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/439af64d-2df4-4687-a7d1-8e9f8ba04da6-kubelet-dir\") pod \"csi-node-driver-w68mc\" (UID: \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\") " pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:39.924311 kubelet[3065]: I0213 21:24:39.924279 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqj5\" (UniqueName: \"kubernetes.io/projected/e1f69b13-b780-4899-86a7-24a3805ad3b1-kube-api-access-4cqj5\") pod \"calico-typha-5558fbf56b-mvkfw\" (UID: \"e1f69b13-b780-4899-86a7-24a3805ad3b1\") " pod="calico-system/calico-typha-5558fbf56b-mvkfw" Feb 13 21:24:39.924392 kubelet[3065]: I0213 21:24:39.924291 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/439af64d-2df4-4687-a7d1-8e9f8ba04da6-varrun\") pod \"csi-node-driver-w68mc\" (UID: \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\") " pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:39.924392 kubelet[3065]: I0213 21:24:39.924300 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-cni-net-dir\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924392 kubelet[3065]: I0213 21:24:39.924308 3065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjlx4\" (UniqueName: \"kubernetes.io/projected/439af64d-2df4-4687-a7d1-8e9f8ba04da6-kube-api-access-mjlx4\") pod \"csi-node-driver-w68mc\" (UID: \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\") " pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:39.924392 kubelet[3065]: I0213 21:24:39.924317 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f69b13-b780-4899-86a7-24a3805ad3b1-tigera-ca-bundle\") pod \"calico-typha-5558fbf56b-mvkfw\" (UID: \"e1f69b13-b780-4899-86a7-24a3805ad3b1\") " pod="calico-system/calico-typha-5558fbf56b-mvkfw" Feb 13 21:24:39.924392 kubelet[3065]: I0213 21:24:39.924331 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a779ef4-9b40-4afc-a777-c841c4123db1-tigera-ca-bundle\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924473 kubelet[3065]: I0213 21:24:39.924349 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a779ef4-9b40-4afc-a777-c841c4123db1-var-lib-calico\") pod \"calico-node-j5hhk\" (UID: \"0a779ef4-9b40-4afc-a777-c841c4123db1\") " pod="calico-system/calico-node-j5hhk" Feb 13 21:24:39.924473 kubelet[3065]: I0213 21:24:39.924363 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/439af64d-2df4-4687-a7d1-8e9f8ba04da6-socket-dir\") pod \"csi-node-driver-w68mc\" (UID: \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\") " pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:39.924473 kubelet[3065]: I0213 21:24:39.924375 3065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e1f69b13-b780-4899-86a7-24a3805ad3b1-typha-certs\") pod \"calico-typha-5558fbf56b-mvkfw\" (UID: \"e1f69b13-b780-4899-86a7-24a3805ad3b1\") " pod="calico-system/calico-typha-5558fbf56b-mvkfw" Feb 13 21:24:40.028379 kubelet[3065]: E0213 21:24:40.028314 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.028379 kubelet[3065]: W0213 21:24:40.028363 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.028918 kubelet[3065]: E0213 21:24:40.028430 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.029172 kubelet[3065]: E0213 21:24:40.028982 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.029172 kubelet[3065]: W0213 21:24:40.029018 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.029172 kubelet[3065]: E0213 21:24:40.029048 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.033275 kubelet[3065]: E0213 21:24:40.033224 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.033503 kubelet[3065]: W0213 21:24:40.033276 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.033503 kubelet[3065]: E0213 21:24:40.033329 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.033996 kubelet[3065]: E0213 21:24:40.033906 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.033996 kubelet[3065]: W0213 21:24:40.033958 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.034337 kubelet[3065]: E0213 21:24:40.034014 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.047493 kubelet[3065]: E0213 21:24:40.047388 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.047493 kubelet[3065]: W0213 21:24:40.047446 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.047821 kubelet[3065]: E0213 21:24:40.047519 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.048273 kubelet[3065]: E0213 21:24:40.048232 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.048273 kubelet[3065]: W0213 21:24:40.048265 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.048509 kubelet[3065]: E0213 21:24:40.048307 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.048914 kubelet[3065]: E0213 21:24:40.048831 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.048914 kubelet[3065]: W0213 21:24:40.048871 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.049259 kubelet[3065]: E0213 21:24:40.048919 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.146945 containerd[1824]: time="2025-02-13T21:24:40.146721975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5558fbf56b-mvkfw,Uid:e1f69b13-b780-4899-86a7-24a3805ad3b1,Namespace:calico-system,Attempt:0,}" Feb 13 21:24:40.154440 containerd[1824]: time="2025-02-13T21:24:40.154404277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5hhk,Uid:0a779ef4-9b40-4afc-a777-c841c4123db1,Namespace:calico-system,Attempt:0,}" Feb 13 21:24:40.181176 containerd[1824]: time="2025-02-13T21:24:40.181134276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:40.181176 containerd[1824]: time="2025-02-13T21:24:40.181169680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:40.181176 containerd[1824]: time="2025-02-13T21:24:40.181178520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:40.181293 containerd[1824]: time="2025-02-13T21:24:40.181181650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:40.181293 containerd[1824]: time="2025-02-13T21:24:40.181211385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:40.181293 containerd[1824]: time="2025-02-13T21:24:40.181220113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:40.181293 containerd[1824]: time="2025-02-13T21:24:40.181233484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:40.181293 containerd[1824]: time="2025-02-13T21:24:40.181260413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:40.202409 systemd[1]: Started cri-containerd-472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3.scope - libcontainer container 472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3. Feb 13 21:24:40.203261 systemd[1]: Started cri-containerd-a0b91c40919361257873616e8cd7918d699e3e599b7ab896942e93d2d7344ed7.scope - libcontainer container a0b91c40919361257873616e8cd7918d699e3e599b7ab896942e93d2d7344ed7. 
Feb 13 21:24:40.214902 containerd[1824]: time="2025-02-13T21:24:40.214877039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5hhk,Uid:0a779ef4-9b40-4afc-a777-c841c4123db1,Namespace:calico-system,Attempt:0,} returns sandbox id \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\"" Feb 13 21:24:40.215652 containerd[1824]: time="2025-02-13T21:24:40.215636105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 21:24:40.228640 containerd[1824]: time="2025-02-13T21:24:40.228620648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5558fbf56b-mvkfw,Uid:e1f69b13-b780-4899-86a7-24a3805ad3b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0b91c40919361257873616e8cd7918d699e3e599b7ab896942e93d2d7344ed7\"" Feb 13 21:24:40.419620 kubelet[3065]: E0213 21:24:40.419519 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.419620 kubelet[3065]: W0213 21:24:40.419575 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.419971 kubelet[3065]: E0213 21:24:40.419632 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.420324 kubelet[3065]: E0213 21:24:40.420250 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.420324 kubelet[3065]: W0213 21:24:40.420287 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.420324 kubelet[3065]: E0213 21:24:40.420319 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.420965 kubelet[3065]: E0213 21:24:40.420884 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.420965 kubelet[3065]: W0213 21:24:40.420921 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.421299 kubelet[3065]: E0213 21:24:40.420963 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.421690 kubelet[3065]: E0213 21:24:40.421611 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.421690 kubelet[3065]: W0213 21:24:40.421650 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.422019 kubelet[3065]: E0213 21:24:40.421695 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.422320 kubelet[3065]: E0213 21:24:40.422249 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.422320 kubelet[3065]: W0213 21:24:40.422279 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.422320 kubelet[3065]: E0213 21:24:40.422309 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.422922 kubelet[3065]: E0213 21:24:40.422846 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.422922 kubelet[3065]: W0213 21:24:40.422885 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.422922 kubelet[3065]: E0213 21:24:40.422923 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.423575 kubelet[3065]: E0213 21:24:40.423503 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.423575 kubelet[3065]: W0213 21:24:40.423531 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.423575 kubelet[3065]: E0213 21:24:40.423559 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.424044 kubelet[3065]: E0213 21:24:40.424012 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.424044 kubelet[3065]: W0213 21:24:40.424039 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.424358 kubelet[3065]: E0213 21:24:40.424065 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.424651 kubelet[3065]: E0213 21:24:40.424578 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.424651 kubelet[3065]: W0213 21:24:40.424605 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.424651 kubelet[3065]: E0213 21:24:40.424631 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.425133 kubelet[3065]: E0213 21:24:40.425077 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.425133 kubelet[3065]: W0213 21:24:40.425128 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.425397 kubelet[3065]: E0213 21:24:40.425160 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.425719 kubelet[3065]: E0213 21:24:40.425633 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.425719 kubelet[3065]: W0213 21:24:40.425671 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.425719 kubelet[3065]: E0213 21:24:40.425705 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.426218 kubelet[3065]: E0213 21:24:40.426164 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.426218 kubelet[3065]: W0213 21:24:40.426189 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.426218 kubelet[3065]: E0213 21:24:40.426215 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.426813 kubelet[3065]: E0213 21:24:40.426730 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.426813 kubelet[3065]: W0213 21:24:40.426767 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.426813 kubelet[3065]: E0213 21:24:40.426801 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.427443 kubelet[3065]: E0213 21:24:40.427368 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.427443 kubelet[3065]: W0213 21:24:40.427412 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.427443 kubelet[3065]: E0213 21:24:40.427446 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.428041 kubelet[3065]: E0213 21:24:40.427963 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.428041 kubelet[3065]: W0213 21:24:40.427993 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.428041 kubelet[3065]: E0213 21:24:40.428022 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.428656 kubelet[3065]: E0213 21:24:40.428578 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.428656 kubelet[3065]: W0213 21:24:40.428615 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.428656 kubelet[3065]: E0213 21:24:40.428653 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.429403 kubelet[3065]: E0213 21:24:40.429326 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.429403 kubelet[3065]: W0213 21:24:40.429365 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.429403 kubelet[3065]: E0213 21:24:40.429400 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.429970 kubelet[3065]: E0213 21:24:40.429899 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.429970 kubelet[3065]: W0213 21:24:40.429929 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.429970 kubelet[3065]: E0213 21:24:40.429958 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.430523 kubelet[3065]: E0213 21:24:40.430486 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.430631 kubelet[3065]: W0213 21:24:40.430524 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.430631 kubelet[3065]: E0213 21:24:40.430560 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.431113 kubelet[3065]: E0213 21:24:40.431073 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.431253 kubelet[3065]: W0213 21:24:40.431127 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.431253 kubelet[3065]: E0213 21:24:40.431158 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.431758 kubelet[3065]: E0213 21:24:40.431723 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.431872 kubelet[3065]: W0213 21:24:40.431762 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.431872 kubelet[3065]: E0213 21:24:40.431797 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.432496 kubelet[3065]: E0213 21:24:40.432412 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.432496 kubelet[3065]: W0213 21:24:40.432447 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.432496 kubelet[3065]: E0213 21:24:40.432482 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.433086 kubelet[3065]: E0213 21:24:40.433029 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.433086 kubelet[3065]: W0213 21:24:40.433059 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.433086 kubelet[3065]: E0213 21:24:40.433088 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:24:40.433732 kubelet[3065]: E0213 21:24:40.433657 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.433732 kubelet[3065]: W0213 21:24:40.433694 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.433732 kubelet[3065]: E0213 21:24:40.433728 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:40.434395 kubelet[3065]: E0213 21:24:40.434350 3065 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:24:40.434395 kubelet[3065]: W0213 21:24:40.434387 3065 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:24:40.434675 kubelet[3065]: E0213 21:24:40.434423 3065 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:24:41.183325 update_engine[1811]: I20250213 21:24:41.183151 1811 update_attempter.cc:509] Updating boot flags... 
Feb 13 21:24:41.221114 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3681) Feb 13 21:24:41.248109 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3682) Feb 13 21:24:41.352700 kubelet[3065]: E0213 21:24:41.352609 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w68mc" podUID="439af64d-2df4-4687-a7d1-8e9f8ba04da6" Feb 13 21:24:41.789316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853304716.mount: Deactivated successfully. Feb 13 21:24:41.833854 containerd[1824]: time="2025-02-13T21:24:41.833792488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:41.834056 containerd[1824]: time="2025-02-13T21:24:41.833992268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 21:24:41.834324 containerd[1824]: time="2025-02-13T21:24:41.834285358Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:41.835424 containerd[1824]: time="2025-02-13T21:24:41.835384046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:41.836088 containerd[1824]: time="2025-02-13T21:24:41.836075258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id 
\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.620398163s" Feb 13 21:24:41.836158 containerd[1824]: time="2025-02-13T21:24:41.836091015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 21:24:41.836684 containerd[1824]: time="2025-02-13T21:24:41.836672091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 21:24:41.837215 containerd[1824]: time="2025-02-13T21:24:41.837200224Z" level=info msg="CreateContainer within sandbox \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 21:24:41.842813 containerd[1824]: time="2025-02-13T21:24:41.842796026Z" level=info msg="CreateContainer within sandbox \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01\"" Feb 13 21:24:41.843122 containerd[1824]: time="2025-02-13T21:24:41.843110946Z" level=info msg="StartContainer for \"ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01\"" Feb 13 21:24:41.867275 systemd[1]: Started cri-containerd-ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01.scope - libcontainer container ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01. 
Feb 13 21:24:41.883000 containerd[1824]: time="2025-02-13T21:24:41.882974255Z" level=info msg="StartContainer for \"ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01\" returns successfully" Feb 13 21:24:41.886769 systemd[1]: cri-containerd-ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01.scope: Deactivated successfully. Feb 13 21:24:42.121705 containerd[1824]: time="2025-02-13T21:24:42.121631882Z" level=info msg="shim disconnected" id=ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01 namespace=k8s.io Feb 13 21:24:42.121705 containerd[1824]: time="2025-02-13T21:24:42.121662837Z" level=warning msg="cleaning up after shim disconnected" id=ce0244e8d5268fb38b48de3b97d1fba8be0e840de9f695fc049160e57df87e01 namespace=k8s.io Feb 13 21:24:42.121705 containerd[1824]: time="2025-02-13T21:24:42.121668129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:24:43.352258 kubelet[3065]: E0213 21:24:43.352185 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w68mc" podUID="439af64d-2df4-4687-a7d1-8e9f8ba04da6" Feb 13 21:24:43.695421 containerd[1824]: time="2025-02-13T21:24:43.695366010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:43.695639 containerd[1824]: time="2025-02-13T21:24:43.695572549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 21:24:43.696042 containerd[1824]: time="2025-02-13T21:24:43.695999883Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:43.696988 containerd[1824]: 
time="2025-02-13T21:24:43.696945949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:43.697377 containerd[1824]: time="2025-02-13T21:24:43.697340907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.860650781s" Feb 13 21:24:43.697377 containerd[1824]: time="2025-02-13T21:24:43.697356101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 21:24:43.697931 containerd[1824]: time="2025-02-13T21:24:43.697896435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 21:24:43.700839 containerd[1824]: time="2025-02-13T21:24:43.700788448Z" level=info msg="CreateContainer within sandbox \"a0b91c40919361257873616e8cd7918d699e3e599b7ab896942e93d2d7344ed7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 21:24:43.705513 containerd[1824]: time="2025-02-13T21:24:43.705465941Z" level=info msg="CreateContainer within sandbox \"a0b91c40919361257873616e8cd7918d699e3e599b7ab896942e93d2d7344ed7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"65bd04422fe5bb9a56c68bb16043cb7025669c014df28c5718df42233efdbebd\"" Feb 13 21:24:43.705735 containerd[1824]: time="2025-02-13T21:24:43.705686845Z" level=info msg="StartContainer for \"65bd04422fe5bb9a56c68bb16043cb7025669c014df28c5718df42233efdbebd\"" Feb 13 21:24:43.733402 systemd[1]: Started cri-containerd-65bd04422fe5bb9a56c68bb16043cb7025669c014df28c5718df42233efdbebd.scope - 
libcontainer container 65bd04422fe5bb9a56c68bb16043cb7025669c014df28c5718df42233efdbebd. Feb 13 21:24:43.762194 containerd[1824]: time="2025-02-13T21:24:43.762129338Z" level=info msg="StartContainer for \"65bd04422fe5bb9a56c68bb16043cb7025669c014df28c5718df42233efdbebd\" returns successfully" Feb 13 21:24:44.425494 kubelet[3065]: I0213 21:24:44.425383 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5558fbf56b-mvkfw" podStartSLOduration=1.956648109 podStartE2EDuration="5.425345628s" podCreationTimestamp="2025-02-13 21:24:39 +0000 UTC" firstStartedPulling="2025-02-13 21:24:40.229128395 +0000 UTC m=+10.916311401" lastFinishedPulling="2025-02-13 21:24:43.697825914 +0000 UTC m=+14.385008920" observedRunningTime="2025-02-13 21:24:44.425028488 +0000 UTC m=+15.112211587" watchObservedRunningTime="2025-02-13 21:24:44.425345628 +0000 UTC m=+15.112528707" Feb 13 21:24:45.352095 kubelet[3065]: E0213 21:24:45.352045 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w68mc" podUID="439af64d-2df4-4687-a7d1-8e9f8ba04da6" Feb 13 21:24:45.404809 kubelet[3065]: I0213 21:24:45.404794 3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:24:46.278778 containerd[1824]: time="2025-02-13T21:24:46.278716187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:46.278986 containerd[1824]: time="2025-02-13T21:24:46.278940804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 21:24:46.279228 containerd[1824]: time="2025-02-13T21:24:46.279188006Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:46.280284 containerd[1824]: time="2025-02-13T21:24:46.280244532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:46.280699 containerd[1824]: time="2025-02-13T21:24:46.280645676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.582735024s" Feb 13 21:24:46.280699 containerd[1824]: time="2025-02-13T21:24:46.280662109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 21:24:46.281620 containerd[1824]: time="2025-02-13T21:24:46.281571456Z" level=info msg="CreateContainer within sandbox \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 21:24:46.287671 containerd[1824]: time="2025-02-13T21:24:46.287629317Z" level=info msg="CreateContainer within sandbox \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237\"" Feb 13 21:24:46.287898 containerd[1824]: time="2025-02-13T21:24:46.287847373Z" level=info msg="StartContainer for \"afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237\"" Feb 13 21:24:46.317392 systemd[1]: Started 
cri-containerd-afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237.scope - libcontainer container afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237. Feb 13 21:24:46.330546 containerd[1824]: time="2025-02-13T21:24:46.330522021Z" level=info msg="StartContainer for \"afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237\" returns successfully" Feb 13 21:24:46.848484 systemd[1]: cri-containerd-afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237.scope: Deactivated successfully. Feb 13 21:24:46.919455 kubelet[3065]: I0213 21:24:46.919385 3065 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 21:24:46.977555 systemd[1]: Created slice kubepods-burstable-pod57a173d5_81af_4ca8_8cbb_e172ae7536f0.slice - libcontainer container kubepods-burstable-pod57a173d5_81af_4ca8_8cbb_e172ae7536f0.slice. Feb 13 21:24:46.987867 systemd[1]: Created slice kubepods-besteffort-poda48dead5_98c2_41d1_85bc_a236403168ea.slice - libcontainer container kubepods-besteffort-poda48dead5_98c2_41d1_85bc_a236403168ea.slice. Feb 13 21:24:46.994648 systemd[1]: Created slice kubepods-burstable-pod702ade0b_3b81_452f_8c6e_de622906e0bd.slice - libcontainer container kubepods-burstable-pod702ade0b_3b81_452f_8c6e_de622906e0bd.slice. Feb 13 21:24:46.999940 systemd[1]: Created slice kubepods-besteffort-pod0324ae41_f4ee_4e56_95ba_ccaaf0b0e39e.slice - libcontainer container kubepods-besteffort-pod0324ae41_f4ee_4e56_95ba_ccaaf0b0e39e.slice. Feb 13 21:24:47.004545 systemd[1]: Created slice kubepods-besteffort-pode73314eb_ec52_4afc_bd2f_8f1593880829.slice - libcontainer container kubepods-besteffort-pode73314eb_ec52_4afc_bd2f_8f1593880829.slice. 
Feb 13 21:24:47.082207 kubelet[3065]: I0213 21:24:47.082147 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e73314eb-ec52-4afc-bd2f-8f1593880829-calico-apiserver-certs\") pod \"calico-apiserver-548d7dd47c-r9z4v\" (UID: \"e73314eb-ec52-4afc-bd2f-8f1593880829\") " pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" Feb 13 21:24:47.082481 kubelet[3065]: I0213 21:24:47.082221 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a48dead5-98c2-41d1-85bc-a236403168ea-tigera-ca-bundle\") pod \"calico-kube-controllers-598cb55bb5-kr8j8\" (UID: \"a48dead5-98c2-41d1-85bc-a236403168ea\") " pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" Feb 13 21:24:47.082481 kubelet[3065]: I0213 21:24:47.082440 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f46q\" (UniqueName: \"kubernetes.io/projected/57a173d5-81af-4ca8-8cbb-e172ae7536f0-kube-api-access-8f46q\") pod \"coredns-6f6b679f8f-8w4xv\" (UID: \"57a173d5-81af-4ca8-8cbb-e172ae7536f0\") " pod="kube-system/coredns-6f6b679f8f-8w4xv" Feb 13 21:24:47.082610 kubelet[3065]: I0213 21:24:47.082497 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/702ade0b-3b81-452f-8c6e-de622906e0bd-config-volume\") pod \"coredns-6f6b679f8f-6r4x2\" (UID: \"702ade0b-3b81-452f-8c6e-de622906e0bd\") " pod="kube-system/coredns-6f6b679f8f-6r4x2" Feb 13 21:24:47.082610 kubelet[3065]: I0213 21:24:47.082528 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wvk\" (UniqueName: \"kubernetes.io/projected/702ade0b-3b81-452f-8c6e-de622906e0bd-kube-api-access-n9wvk\") pod 
\"coredns-6f6b679f8f-6r4x2\" (UID: \"702ade0b-3b81-452f-8c6e-de622906e0bd\") " pod="kube-system/coredns-6f6b679f8f-6r4x2" Feb 13 21:24:47.082610 kubelet[3065]: I0213 21:24:47.082554 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e-calico-apiserver-certs\") pod \"calico-apiserver-548d7dd47c-ftmtc\" (UID: \"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e\") " pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" Feb 13 21:24:47.082610 kubelet[3065]: I0213 21:24:47.082583 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57a173d5-81af-4ca8-8cbb-e172ae7536f0-config-volume\") pod \"coredns-6f6b679f8f-8w4xv\" (UID: \"57a173d5-81af-4ca8-8cbb-e172ae7536f0\") " pod="kube-system/coredns-6f6b679f8f-8w4xv" Feb 13 21:24:47.082858 kubelet[3065]: I0213 21:24:47.082648 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57gh\" (UniqueName: \"kubernetes.io/projected/e73314eb-ec52-4afc-bd2f-8f1593880829-kube-api-access-v57gh\") pod \"calico-apiserver-548d7dd47c-r9z4v\" (UID: \"e73314eb-ec52-4afc-bd2f-8f1593880829\") " pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" Feb 13 21:24:47.082858 kubelet[3065]: I0213 21:24:47.082717 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bfv\" (UniqueName: \"kubernetes.io/projected/a48dead5-98c2-41d1-85bc-a236403168ea-kube-api-access-v4bfv\") pod \"calico-kube-controllers-598cb55bb5-kr8j8\" (UID: \"a48dead5-98c2-41d1-85bc-a236403168ea\") " pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" Feb 13 21:24:47.082858 kubelet[3065]: I0213 21:24:47.082763 3065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-np6gs\" (UniqueName: \"kubernetes.io/projected/0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e-kube-api-access-np6gs\") pod \"calico-apiserver-548d7dd47c-ftmtc\" (UID: \"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e\") " pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" Feb 13 21:24:47.284446 containerd[1824]: time="2025-02-13T21:24:47.284321237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8w4xv,Uid:57a173d5-81af-4ca8-8cbb-e172ae7536f0,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:47.291821 containerd[1824]: time="2025-02-13T21:24:47.291786178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598cb55bb5-kr8j8,Uid:a48dead5-98c2-41d1-85bc-a236403168ea,Namespace:calico-system,Attempt:0,}" Feb 13 21:24:47.292782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237-rootfs.mount: Deactivated successfully. Feb 13 21:24:47.297260 containerd[1824]: time="2025-02-13T21:24:47.297208402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6r4x2,Uid:702ade0b-3b81-452f-8c6e-de622906e0bd,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:47.302730 containerd[1824]: time="2025-02-13T21:24:47.302718180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-ftmtc,Uid:0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 21:24:47.307204 containerd[1824]: time="2025-02-13T21:24:47.307088661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-r9z4v,Uid:e73314eb-ec52-4afc-bd2f-8f1593880829,Namespace:calico-apiserver,Attempt:0,}" Feb 13 21:24:47.368074 systemd[1]: Created slice kubepods-besteffort-pod439af64d_2df4_4687_a7d1_8e9f8ba04da6.slice - libcontainer container kubepods-besteffort-pod439af64d_2df4_4687_a7d1_8e9f8ba04da6.slice. 
Feb 13 21:24:47.373708 containerd[1824]: time="2025-02-13T21:24:47.373593679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w68mc,Uid:439af64d-2df4-4687-a7d1-8e9f8ba04da6,Namespace:calico-system,Attempt:0,}" Feb 13 21:24:47.503484 containerd[1824]: time="2025-02-13T21:24:47.503455566Z" level=info msg="shim disconnected" id=afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237 namespace=k8s.io Feb 13 21:24:47.503484 containerd[1824]: time="2025-02-13T21:24:47.503482775Z" level=warning msg="cleaning up after shim disconnected" id=afbb7e2e5f567fa395b1838efd4fdba1f0b1ebd17a0f943e7945bb68c7a24237 namespace=k8s.io Feb 13 21:24:47.503578 containerd[1824]: time="2025-02-13T21:24:47.503488272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:24:47.535747 containerd[1824]: time="2025-02-13T21:24:47.535665920Z" level=error msg="Failed to destroy network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.535991 containerd[1824]: time="2025-02-13T21:24:47.535958268Z" level=error msg="encountered an error cleaning up failed sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.536278 containerd[1824]: time="2025-02-13T21:24:47.536139415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8w4xv,Uid:57a173d5-81af-4ca8-8cbb-e172ae7536f0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.536446 kubelet[3065]: E0213 21:24:47.536398 3065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.536517 kubelet[3065]: E0213 21:24:47.536495 3065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8w4xv" Feb 13 21:24:47.536553 kubelet[3065]: E0213 21:24:47.536522 3065 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8w4xv" Feb 13 21:24:47.536592 kubelet[3065]: E0213 21:24:47.536571 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8w4xv_kube-system(57a173d5-81af-4ca8-8cbb-e172ae7536f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8w4xv_kube-system(57a173d5-81af-4ca8-8cbb-e172ae7536f0)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8w4xv" podUID="57a173d5-81af-4ca8-8cbb-e172ae7536f0" Feb 13 21:24:47.541485 containerd[1824]: time="2025-02-13T21:24:47.541438577Z" level=error msg="Failed to destroy network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.541836 containerd[1824]: time="2025-02-13T21:24:47.541813055Z" level=error msg="encountered an error cleaning up failed sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.541920 containerd[1824]: time="2025-02-13T21:24:47.541863698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w68mc,Uid:439af64d-2df4-4687-a7d1-8e9f8ba04da6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542024 kubelet[3065]: E0213 21:24:47.541998 3065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542060 kubelet[3065]: E0213 21:24:47.542047 3065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:47.542082 kubelet[3065]: E0213 21:24:47.542068 3065 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w68mc" Feb 13 21:24:47.542150 kubelet[3065]: E0213 21:24:47.542117 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w68mc_calico-system(439af64d-2df4-4687-a7d1-8e9f8ba04da6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w68mc_calico-system(439af64d-2df4-4687-a7d1-8e9f8ba04da6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w68mc" 
podUID="439af64d-2df4-4687-a7d1-8e9f8ba04da6" Feb 13 21:24:47.542618 containerd[1824]: time="2025-02-13T21:24:47.542577183Z" level=error msg="Failed to destroy network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542769 containerd[1824]: time="2025-02-13T21:24:47.542755452Z" level=error msg="encountered an error cleaning up failed sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542800 containerd[1824]: time="2025-02-13T21:24:47.542784423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598cb55bb5-kr8j8,Uid:a48dead5-98c2-41d1-85bc-a236403168ea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542833 containerd[1824]: time="2025-02-13T21:24:47.542803523Z" level=error msg="Failed to destroy network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542858 containerd[1824]: time="2025-02-13T21:24:47.542842847Z" level=error msg="Failed to destroy network for sandbox 
\"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542890 kubelet[3065]: E0213 21:24:47.542876 3065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542913 kubelet[3065]: E0213 21:24:47.542903 3065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" Feb 13 21:24:47.542936 containerd[1824]: time="2025-02-13T21:24:47.542892900Z" level=error msg="Failed to destroy network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.542957 kubelet[3065]: E0213 21:24:47.542916 3065 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" Feb 13 21:24:47.542957 kubelet[3065]: E0213 21:24:47.542938 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-598cb55bb5-kr8j8_calico-system(a48dead5-98c2-41d1-85bc-a236403168ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-598cb55bb5-kr8j8_calico-system(a48dead5-98c2-41d1-85bc-a236403168ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" podUID="a48dead5-98c2-41d1-85bc-a236403168ea" Feb 13 21:24:47.543009 containerd[1824]: time="2025-02-13T21:24:47.542957476Z" level=error msg="encountered an error cleaning up failed sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543009 containerd[1824]: time="2025-02-13T21:24:47.542978723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-r9z4v,Uid:e73314eb-ec52-4afc-bd2f-8f1593880829,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543009 
containerd[1824]: time="2025-02-13T21:24:47.542999355Z" level=error msg="encountered an error cleaning up failed sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543073 containerd[1824]: time="2025-02-13T21:24:47.543020058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-ftmtc,Uid:0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543073 containerd[1824]: time="2025-02-13T21:24:47.543039335Z" level=error msg="encountered an error cleaning up failed sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543073 containerd[1824]: time="2025-02-13T21:24:47.543063823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6r4x2,Uid:702ade0b-3b81-452f-8c6e-de622906e0bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543162 
kubelet[3065]: E0213 21:24:47.543044 3065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543162 kubelet[3065]: E0213 21:24:47.543065 3065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" Feb 13 21:24:47.543162 kubelet[3065]: E0213 21:24:47.543075 3065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543162 kubelet[3065]: E0213 21:24:47.543079 3065 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" Feb 13 21:24:47.543250 kubelet[3065]: E0213 21:24:47.543122 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"calico-apiserver-548d7dd47c-r9z4v_calico-apiserver(e73314eb-ec52-4afc-bd2f-8f1593880829)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548d7dd47c-r9z4v_calico-apiserver(e73314eb-ec52-4afc-bd2f-8f1593880829)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" podUID="e73314eb-ec52-4afc-bd2f-8f1593880829" Feb 13 21:24:47.543250 kubelet[3065]: E0213 21:24:47.543125 3065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:47.543250 kubelet[3065]: E0213 21:24:47.543159 3065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6r4x2" Feb 13 21:24:47.543335 kubelet[3065]: E0213 21:24:47.543170 3065 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6r4x2" Feb 13 21:24:47.543335 kubelet[3065]: E0213 21:24:47.543187 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6r4x2_kube-system(702ade0b-3b81-452f-8c6e-de622906e0bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6r4x2_kube-system(702ade0b-3b81-452f-8c6e-de622906e0bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6r4x2" podUID="702ade0b-3b81-452f-8c6e-de622906e0bd" Feb 13 21:24:47.543335 kubelet[3065]: E0213 21:24:47.543097 3065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" Feb 13 21:24:47.543420 kubelet[3065]: E0213 21:24:47.543218 3065 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" Feb 13 21:24:47.543420 kubelet[3065]: E0213 
21:24:47.543232 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548d7dd47c-ftmtc_calico-apiserver(0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548d7dd47c-ftmtc_calico-apiserver(0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" podUID="0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e" Feb 13 21:24:48.286829 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe-shm.mount: Deactivated successfully. Feb 13 21:24:48.286932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f-shm.mount: Deactivated successfully. Feb 13 21:24:48.287005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2-shm.mount: Deactivated successfully. Feb 13 21:24:48.287071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272-shm.mount: Deactivated successfully. Feb 13 21:24:48.287145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033-shm.mount: Deactivated successfully. Feb 13 21:24:48.287212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7-shm.mount: Deactivated successfully. 
Feb 13 21:24:48.419183 kubelet[3065]: I0213 21:24:48.419061 3065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:24:48.420593 containerd[1824]: time="2025-02-13T21:24:48.420514654Z" level=info msg="StopPodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\"" Feb 13 21:24:48.421336 containerd[1824]: time="2025-02-13T21:24:48.420960097Z" level=info msg="Ensure that sandbox bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272 in task-service has been cleanup successfully" Feb 13 21:24:48.421474 kubelet[3065]: I0213 21:24:48.421285 3065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:24:48.422460 containerd[1824]: time="2025-02-13T21:24:48.422386735Z" level=info msg="StopPodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\"" Feb 13 21:24:48.422972 containerd[1824]: time="2025-02-13T21:24:48.422898956Z" level=info msg="Ensure that sandbox 1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033 in task-service has been cleanup successfully" Feb 13 21:24:48.423712 kubelet[3065]: I0213 21:24:48.423650 3065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:24:48.424932 containerd[1824]: time="2025-02-13T21:24:48.424846519Z" level=info msg="StopPodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\"" Feb 13 21:24:48.425472 containerd[1824]: time="2025-02-13T21:24:48.425367835Z" level=info msg="Ensure that sandbox 3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe in task-service has been cleanup successfully" Feb 13 21:24:48.426123 kubelet[3065]: I0213 21:24:48.426049 3065 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:24:48.427315 containerd[1824]: time="2025-02-13T21:24:48.427196651Z" level=info msg="StopPodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\"" Feb 13 21:24:48.427731 containerd[1824]: time="2025-02-13T21:24:48.427665170Z" level=info msg="Ensure that sandbox b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f in task-service has been cleanup successfully" Feb 13 21:24:48.428751 kubelet[3065]: I0213 21:24:48.428698 3065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:24:48.430015 containerd[1824]: time="2025-02-13T21:24:48.429925550Z" level=info msg="StopPodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\"" Feb 13 21:24:48.430535 containerd[1824]: time="2025-02-13T21:24:48.430455809Z" level=info msg="Ensure that sandbox 95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7 in task-service has been cleanup successfully" Feb 13 21:24:48.435883 kubelet[3065]: I0213 21:24:48.435864 3065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:24:48.436025 containerd[1824]: time="2025-02-13T21:24:48.435891327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 21:24:48.436223 containerd[1824]: time="2025-02-13T21:24:48.436208307Z" level=info msg="StopPodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\"" Feb 13 21:24:48.436327 containerd[1824]: time="2025-02-13T21:24:48.436317364Z" level=info msg="Ensure that sandbox 981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2 in task-service has been cleanup successfully" Feb 13 21:24:48.449411 containerd[1824]: time="2025-02-13T21:24:48.449363046Z" level=error 
msg="StopPodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" failed" error="failed to destroy network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:48.449699 kubelet[3065]: E0213 21:24:48.449673 3065 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:24:48.449754 kubelet[3065]: E0213 21:24:48.449716 3065 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033"} Feb 13 21:24:48.449785 kubelet[3065]: E0213 21:24:48.449764 3065 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a48dead5-98c2-41d1-85bc-a236403168ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 21:24:48.449851 kubelet[3065]: E0213 21:24:48.449786 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a48dead5-98c2-41d1-85bc-a236403168ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" podUID="a48dead5-98c2-41d1-85bc-a236403168ea" Feb 13 21:24:48.449904 containerd[1824]: time="2025-02-13T21:24:48.449779916Z" level=error msg="StopPodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" failed" error="failed to destroy network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:48.449904 containerd[1824]: time="2025-02-13T21:24:48.449844333Z" level=error msg="StopPodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" failed" error="failed to destroy network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:48.449951 kubelet[3065]: E0213 21:24:48.449869 3065 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:24:48.449951 kubelet[3065]: E0213 21:24:48.449888 3065 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272"} Feb 13 21:24:48.449951 kubelet[3065]: E0213 21:24:48.449909 3065 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"702ade0b-3b81-452f-8c6e-de622906e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 21:24:48.449951 kubelet[3065]: E0213 21:24:48.449913 3065 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:24:48.450038 kubelet[3065]: E0213 21:24:48.449925 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"702ade0b-3b81-452f-8c6e-de622906e0bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6r4x2" podUID="702ade0b-3b81-452f-8c6e-de622906e0bd" Feb 13 21:24:48.450038 kubelet[3065]: E0213 21:24:48.449931 3065 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7"} Feb 13 21:24:48.450038 kubelet[3065]: E0213 21:24:48.449949 3065 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57a173d5-81af-4ca8-8cbb-e172ae7536f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 21:24:48.450038 kubelet[3065]: E0213 21:24:48.449960 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57a173d5-81af-4ca8-8cbb-e172ae7536f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8w4xv" podUID="57a173d5-81af-4ca8-8cbb-e172ae7536f0" Feb 13 21:24:48.450178 kubelet[3065]: E0213 21:24:48.450150 3065 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:24:48.450178 kubelet[3065]: E0213 21:24:48.450168 3065 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f"} Feb 13 21:24:48.450218 containerd[1824]: time="2025-02-13T21:24:48.450085505Z" level=error msg="StopPodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" failed" error="failed to destroy network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:48.450238 kubelet[3065]: E0213 21:24:48.450180 3065 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e73314eb-ec52-4afc-bd2f-8f1593880829\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 21:24:48.450238 kubelet[3065]: E0213 21:24:48.450189 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e73314eb-ec52-4afc-bd2f-8f1593880829\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" podUID="e73314eb-ec52-4afc-bd2f-8f1593880829" Feb 13 21:24:48.450314 containerd[1824]: time="2025-02-13T21:24:48.450299626Z" level=error msg="StopPodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" 
failed" error="failed to destroy network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:48.450377 kubelet[3065]: E0213 21:24:48.450365 3065 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:24:48.450400 kubelet[3065]: E0213 21:24:48.450381 3065 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe"} Feb 13 21:24:48.450400 kubelet[3065]: E0213 21:24:48.450396 3065 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 21:24:48.450450 kubelet[3065]: E0213 21:24:48.450407 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"439af64d-2df4-4687-a7d1-8e9f8ba04da6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w68mc" podUID="439af64d-2df4-4687-a7d1-8e9f8ba04da6" Feb 13 21:24:48.453533 containerd[1824]: time="2025-02-13T21:24:48.453490889Z" level=error msg="StopPodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" failed" error="failed to destroy network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:24:48.453677 kubelet[3065]: E0213 21:24:48.453634 3065 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:24:48.453705 kubelet[3065]: E0213 21:24:48.453683 3065 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2"} Feb 13 21:24:48.453705 kubelet[3065]: E0213 21:24:48.453698 3065 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 21:24:48.453753 kubelet[3065]: E0213 21:24:48.453708 3065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" podUID="0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e" Feb 13 21:24:51.880564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911984964.mount: Deactivated successfully. Feb 13 21:24:51.903199 containerd[1824]: time="2025-02-13T21:24:51.903145562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:51.903400 containerd[1824]: time="2025-02-13T21:24:51.903358879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 21:24:51.903689 containerd[1824]: time="2025-02-13T21:24:51.903642763Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:51.904951 containerd[1824]: time="2025-02-13T21:24:51.904910597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:24:51.905208 containerd[1824]: time="2025-02-13T21:24:51.905167477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.469248109s" Feb 13 21:24:51.905208 containerd[1824]: time="2025-02-13T21:24:51.905182903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 21:24:51.908747 containerd[1824]: time="2025-02-13T21:24:51.908731343Z" level=info msg="CreateContainer within sandbox \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 21:24:51.915941 containerd[1824]: time="2025-02-13T21:24:51.915889095Z" level=info msg="CreateContainer within sandbox \"472ca27424c840cd4dec191361c47fd0c496267bfd466d29998d8578f102c2a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c2cfda1bfb7441367213e01dca7786558d3a89ba4a88580849abb4c77bba2dd\"" Feb 13 21:24:51.916269 containerd[1824]: time="2025-02-13T21:24:51.916220108Z" level=info msg="StartContainer for \"4c2cfda1bfb7441367213e01dca7786558d3a89ba4a88580849abb4c77bba2dd\"" Feb 13 21:24:51.942357 systemd[1]: Started cri-containerd-4c2cfda1bfb7441367213e01dca7786558d3a89ba4a88580849abb4c77bba2dd.scope - libcontainer container 4c2cfda1bfb7441367213e01dca7786558d3a89ba4a88580849abb4c77bba2dd. Feb 13 21:24:51.957959 containerd[1824]: time="2025-02-13T21:24:51.957902946Z" level=info msg="StartContainer for \"4c2cfda1bfb7441367213e01dca7786558d3a89ba4a88580849abb4c77bba2dd\" returns successfully" Feb 13 21:24:52.024116 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 21:24:52.024165 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 21:24:52.484070 kubelet[3065]: I0213 21:24:52.483956 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j5hhk" podStartSLOduration=1.793738958 podStartE2EDuration="13.483922121s" podCreationTimestamp="2025-02-13 21:24:39 +0000 UTC" firstStartedPulling="2025-02-13 21:24:40.21546495 +0000 UTC m=+10.902647961" lastFinishedPulling="2025-02-13 21:24:51.905648117 +0000 UTC m=+22.592831124" observedRunningTime="2025-02-13 21:24:52.482749116 +0000 UTC m=+23.169932191" watchObservedRunningTime="2025-02-13 21:24:52.483922121 +0000 UTC m=+23.171105172" Feb 13 21:24:56.327047 kubelet[3065]: I0213 21:24:56.326937 3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:24:56.460145 kernel: bpftool[4738]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 21:24:56.607441 systemd-networkd[1610]: vxlan.calico: Link UP Feb 13 21:24:56.607444 systemd-networkd[1610]: vxlan.calico: Gained carrier Feb 13 21:24:58.017675 systemd-networkd[1610]: vxlan.calico: Gained IPv6LL Feb 13 21:24:59.354754 containerd[1824]: time="2025-02-13T21:24:59.354667291Z" level=info msg="StopPodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\"" Feb 13 21:24:59.355916 containerd[1824]: time="2025-02-13T21:24:59.354663764Z" level=info msg="StopPodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\"" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" iface="eth0" netns="/var/run/netns/cni-5b938994-07b3-cf04-94a3-e23ec77530ab" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" iface="eth0" netns="/var/run/netns/cni-5b938994-07b3-cf04-94a3-e23ec77530ab" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" iface="eth0" netns="/var/run/netns/cni-5b938994-07b3-cf04-94a3-e23ec77530ab" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.439 [INFO][4909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.439 [INFO][4909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.439 [INFO][4909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.443 [WARNING][4909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.443 [INFO][4909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.444 [INFO][4909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:24:59.447246 containerd[1824]: 2025-02-13 21:24:59.446 [INFO][4875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:24:59.447605 containerd[1824]: time="2025-02-13T21:24:59.447392765Z" level=info msg="TearDown network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" successfully" Feb 13 21:24:59.447605 containerd[1824]: time="2025-02-13T21:24:59.447412752Z" level=info msg="StopPodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" returns successfully" Feb 13 21:24:59.447955 containerd[1824]: time="2025-02-13T21:24:59.447940349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6r4x2,Uid:702ade0b-3b81-452f-8c6e-de622906e0bd,Namespace:kube-system,Attempt:1,}" Feb 13 21:24:59.449084 systemd[1]: run-netns-cni\x2d5b938994\x2d07b3\x2dcf04\x2d94a3\x2de23ec77530ab.mount: Deactivated successfully. 
Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" iface="eth0" netns="/var/run/netns/cni-44bcf5d6-f823-8228-f43a-782e7685e827" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" iface="eth0" netns="/var/run/netns/cni-44bcf5d6-f823-8228-f43a-782e7685e827" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" iface="eth0" netns="/var/run/netns/cni-44bcf5d6-f823-8228-f43a-782e7685e827" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.426 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.439 [INFO][4910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 
21:24:59.439 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.444 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.448 [WARNING][4910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.448 [INFO][4910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.449 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:24:59.450938 containerd[1824]: 2025-02-13 21:24:59.450 [INFO][4874] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:24:59.451262 containerd[1824]: time="2025-02-13T21:24:59.451006400Z" level=info msg="TearDown network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" successfully" Feb 13 21:24:59.451262 containerd[1824]: time="2025-02-13T21:24:59.451021340Z" level=info msg="StopPodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" returns successfully" Feb 13 21:24:59.451386 containerd[1824]: time="2025-02-13T21:24:59.451332927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598cb55bb5-kr8j8,Uid:a48dead5-98c2-41d1-85bc-a236403168ea,Namespace:calico-system,Attempt:1,}" Feb 13 21:24:59.452587 systemd[1]: run-netns-cni\x2d44bcf5d6\x2df823\x2d8228\x2df43a\x2d782e7685e827.mount: Deactivated successfully. Feb 13 21:24:59.519914 systemd-networkd[1610]: cali0c2861587d7: Link UP Feb 13 21:24:59.520046 systemd-networkd[1610]: cali0c2861587d7: Gained carrier Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.483 [INFO][4944] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0 calico-kube-controllers-598cb55bb5- calico-system a48dead5-98c2-41d1-85bc-a236403168ea 720 0 2025-02-13 21:24:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:598cb55bb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-e8b80a8c0e calico-kube-controllers-598cb55bb5-kr8j8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0c2861587d7 [] []}} ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" 
Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.483 [INFO][4944] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.497 [INFO][4983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" HandleID="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.503 [INFO][4983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" HandleID="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a8210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-e8b80a8c0e", "pod":"calico-kube-controllers-598cb55bb5-kr8j8", "timestamp":"2025-02-13 21:24:59.497702779 +0000 UTC"}, Hostname:"ci-4081.3.1-a-e8b80a8c0e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.503 [INFO][4983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.503 [INFO][4983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.503 [INFO][4983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-e8b80a8c0e' Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.505 [INFO][4983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.508 [INFO][4983] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.510 [INFO][4983] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.512 [INFO][4983] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.513 [INFO][4983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.513 [INFO][4983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.514 [INFO][4983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252 Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.515 [INFO][4983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 
handle="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.518 [INFO][4983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.129/26] block=192.168.123.128/26 handle="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.518 [INFO][4983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.129/26] handle="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.518 [INFO][4983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:24:59.524818 containerd[1824]: 2025-02-13 21:24:59.518 [INFO][4983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.129/26] IPv6=[] ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" HandleID="k8s-pod-network.9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.525269 containerd[1824]: 2025-02-13 21:24:59.519 [INFO][4944] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0", GenerateName:"calico-kube-controllers-598cb55bb5-", Namespace:"calico-system", SelfLink:"", UID:"a48dead5-98c2-41d1-85bc-a236403168ea", 
ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598cb55bb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"", Pod:"calico-kube-controllers-598cb55bb5-kr8j8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c2861587d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:24:59.525269 containerd[1824]: 2025-02-13 21:24:59.519 [INFO][4944] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.129/32] ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.525269 containerd[1824]: 2025-02-13 21:24:59.519 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c2861587d7 ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.525269 
containerd[1824]: 2025-02-13 21:24:59.520 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.525269 containerd[1824]: 2025-02-13 21:24:59.520 [INFO][4944] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0", GenerateName:"calico-kube-controllers-598cb55bb5-", Namespace:"calico-system", SelfLink:"", UID:"a48dead5-98c2-41d1-85bc-a236403168ea", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598cb55bb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252", Pod:"calico-kube-controllers-598cb55bb5-kr8j8", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c2861587d7", MAC:"4e:c3:e6:d8:59:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:24:59.525269 containerd[1824]: 2025-02-13 21:24:59.524 [INFO][4944] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252" Namespace="calico-system" Pod="calico-kube-controllers-598cb55bb5-kr8j8" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:24:59.533904 containerd[1824]: time="2025-02-13T21:24:59.533828961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:59.533904 containerd[1824]: time="2025-02-13T21:24:59.533856847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:59.534161 containerd[1824]: time="2025-02-13T21:24:59.534097690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:59.534161 containerd[1824]: time="2025-02-13T21:24:59.534149374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:59.553665 systemd[1]: Started cri-containerd-9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252.scope - libcontainer container 9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252. 
Feb 13 21:24:59.627581 containerd[1824]: time="2025-02-13T21:24:59.627481862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598cb55bb5-kr8j8,Uid:a48dead5-98c2-41d1-85bc-a236403168ea,Namespace:calico-system,Attempt:1,} returns sandbox id \"9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252\"" Feb 13 21:24:59.628443 containerd[1824]: time="2025-02-13T21:24:59.628424561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 21:24:59.629623 systemd-networkd[1610]: cali54252d25725: Link UP Feb 13 21:24:59.629719 systemd-networkd[1610]: cali54252d25725: Gained carrier Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.483 [INFO][4936] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0 coredns-6f6b679f8f- kube-system 702ade0b-3b81-452f-8c6e-de622906e0bd 721 0 2025-02-13 21:24:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-e8b80a8c0e coredns-6f6b679f8f-6r4x2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali54252d25725 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.483 [INFO][4936] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.634906 containerd[1824]: 
2025-02-13 21:24:59.497 [INFO][4984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" HandleID="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.503 [INFO][4984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" HandleID="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-e8b80a8c0e", "pod":"coredns-6f6b679f8f-6r4x2", "timestamp":"2025-02-13 21:24:59.497756899 +0000 UTC"}, Hostname:"ci-4081.3.1-a-e8b80a8c0e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.503 [INFO][4984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.518 [INFO][4984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.518 [INFO][4984] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-e8b80a8c0e' Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.606 [INFO][4984] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.612 [INFO][4984] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.616 [INFO][4984] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.617 [INFO][4984] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.619 [INFO][4984] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.619 [INFO][4984] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.621 [INFO][4984] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.624 [INFO][4984] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.627 [INFO][4984] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.123.130/26] block=192.168.123.128/26 handle="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.627 [INFO][4984] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.130/26] handle="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.627 [INFO][4984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:24:59.634906 containerd[1824]: 2025-02-13 21:24:59.627 [INFO][4984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.130/26] IPv6=[] ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" HandleID="k8s-pod-network.9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.635297 containerd[1824]: 2025-02-13 21:24:59.628 [INFO][4936] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"702ade0b-3b81-452f-8c6e-de622906e0bd", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"", Pod:"coredns-6f6b679f8f-6r4x2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54252d25725", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:24:59.635297 containerd[1824]: 2025-02-13 21:24:59.628 [INFO][4936] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.130/32] ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.635297 containerd[1824]: 2025-02-13 21:24:59.629 [INFO][4936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54252d25725 ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.635297 containerd[1824]: 2025-02-13 21:24:59.629 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.635297 containerd[1824]: 2025-02-13 21:24:59.629 [INFO][4936] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"702ade0b-3b81-452f-8c6e-de622906e0bd", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb", Pod:"coredns-6f6b679f8f-6r4x2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54252d25725", MAC:"42:22:b8:eb:b4:56", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:24:59.635297 containerd[1824]: 2025-02-13 21:24:59.633 [INFO][4936] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb" Namespace="kube-system" Pod="coredns-6f6b679f8f-6r4x2" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:24:59.660892 containerd[1824]: time="2025-02-13T21:24:59.660613091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:59.660892 containerd[1824]: time="2025-02-13T21:24:59.660849983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:59.660892 containerd[1824]: time="2025-02-13T21:24:59.660858546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:59.661002 containerd[1824]: time="2025-02-13T21:24:59.660901171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:59.685375 systemd[1]: Started cri-containerd-9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb.scope - libcontainer container 9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb. 
Feb 13 21:24:59.711468 containerd[1824]: time="2025-02-13T21:24:59.711441082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6r4x2,Uid:702ade0b-3b81-452f-8c6e-de622906e0bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb\"" Feb 13 21:24:59.712760 containerd[1824]: time="2025-02-13T21:24:59.712742664Z" level=info msg="CreateContainer within sandbox \"9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 21:24:59.717491 containerd[1824]: time="2025-02-13T21:24:59.717448547Z" level=info msg="CreateContainer within sandbox \"9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a81d616ac62a4b06886a6102b911bfccf0ab75baf7f3970c140cff053543dc2\"" Feb 13 21:24:59.717709 containerd[1824]: time="2025-02-13T21:24:59.717678520Z" level=info msg="StartContainer for \"9a81d616ac62a4b06886a6102b911bfccf0ab75baf7f3970c140cff053543dc2\"" Feb 13 21:24:59.741326 systemd[1]: Started cri-containerd-9a81d616ac62a4b06886a6102b911bfccf0ab75baf7f3970c140cff053543dc2.scope - libcontainer container 9a81d616ac62a4b06886a6102b911bfccf0ab75baf7f3970c140cff053543dc2. 
Feb 13 21:24:59.755460 containerd[1824]: time="2025-02-13T21:24:59.755432698Z" level=info msg="StartContainer for \"9a81d616ac62a4b06886a6102b911bfccf0ab75baf7f3970c140cff053543dc2\" returns successfully" Feb 13 21:25:00.353191 containerd[1824]: time="2025-02-13T21:25:00.353090837Z" level=info msg="StopPodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\"" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.423 [INFO][5184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.423 [INFO][5184] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" iface="eth0" netns="/var/run/netns/cni-ffd7ee95-82e2-81e9-906c-090d34f57b7a" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.423 [INFO][5184] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" iface="eth0" netns="/var/run/netns/cni-ffd7ee95-82e2-81e9-906c-090d34f57b7a" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.424 [INFO][5184] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" iface="eth0" netns="/var/run/netns/cni-ffd7ee95-82e2-81e9-906c-090d34f57b7a" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.424 [INFO][5184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.424 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.442 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.442 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.442 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.447 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.447 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.449 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:00.451240 containerd[1824]: 2025-02-13 21:25:00.450 [INFO][5184] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:00.452140 containerd[1824]: time="2025-02-13T21:25:00.451405370Z" level=info msg="TearDown network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" successfully" Feb 13 21:25:00.452140 containerd[1824]: time="2025-02-13T21:25:00.451440093Z" level=info msg="StopPodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" returns successfully" Feb 13 21:25:00.452140 containerd[1824]: time="2025-02-13T21:25:00.452039049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-ftmtc,Uid:0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 21:25:00.454055 systemd[1]: run-netns-cni\x2dffd7ee95\x2d82e2\x2d81e9\x2d906c\x2d090d34f57b7a.mount: Deactivated successfully. 
Feb 13 21:25:00.479581 kubelet[3065]: I0213 21:25:00.479534 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6r4x2" podStartSLOduration=26.479519963 podStartE2EDuration="26.479519963s" podCreationTimestamp="2025-02-13 21:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:25:00.479401123 +0000 UTC m=+31.166584130" watchObservedRunningTime="2025-02-13 21:25:00.479519963 +0000 UTC m=+31.166702970" Feb 13 21:25:00.512332 systemd-networkd[1610]: caliaccfda3efdf: Link UP Feb 13 21:25:00.512451 systemd-networkd[1610]: caliaccfda3efdf: Gained carrier Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.478 [INFO][5213] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0 calico-apiserver-548d7dd47c- calico-apiserver 0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e 735 0 2025-02-13 21:24:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548d7dd47c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-e8b80a8c0e calico-apiserver-548d7dd47c-ftmtc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaccfda3efdf [] []}} ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.479 [INFO][5213] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" 
Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.494 [INFO][5234] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" HandleID="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.498 [INFO][5234] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" HandleID="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003665c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-e8b80a8c0e", "pod":"calico-apiserver-548d7dd47c-ftmtc", "timestamp":"2025-02-13 21:25:00.494598708 +0000 UTC"}, Hostname:"ci-4081.3.1-a-e8b80a8c0e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.498 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.498 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.498 [INFO][5234] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-e8b80a8c0e' Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.499 [INFO][5234] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.501 [INFO][5234] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.503 [INFO][5234] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.504 [INFO][5234] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.505 [INFO][5234] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.505 [INFO][5234] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.506 [INFO][5234] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386 Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.507 [INFO][5234] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.510 [INFO][5234] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.123.131/26] block=192.168.123.128/26 handle="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.510 [INFO][5234] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.131/26] handle="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.510 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:00.517279 containerd[1824]: 2025-02-13 21:25:00.510 [INFO][5234] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.131/26] IPv6=[] ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" HandleID="k8s-pod-network.cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.517764 containerd[1824]: 2025-02-13 21:25:00.511 [INFO][5213] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"", Pod:"calico-apiserver-548d7dd47c-ftmtc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaccfda3efdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:00.517764 containerd[1824]: 2025-02-13 21:25:00.511 [INFO][5213] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.131/32] ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.517764 containerd[1824]: 2025-02-13 21:25:00.511 [INFO][5213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaccfda3efdf ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.517764 containerd[1824]: 2025-02-13 21:25:00.512 [INFO][5213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" 
WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.517764 containerd[1824]: 2025-02-13 21:25:00.512 [INFO][5213] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386", Pod:"calico-apiserver-548d7dd47c-ftmtc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaccfda3efdf", MAC:"4a:82:03:1f:8b:71", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:00.517764 containerd[1824]: 2025-02-13 21:25:00.516 [INFO][5213] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-ftmtc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:00.526153 containerd[1824]: time="2025-02-13T21:25:00.526093717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:25:00.526153 containerd[1824]: time="2025-02-13T21:25:00.526145564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:25:00.526352 containerd[1824]: time="2025-02-13T21:25:00.526165782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:00.526429 containerd[1824]: time="2025-02-13T21:25:00.526391189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:00.551293 systemd[1]: Started cri-containerd-cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386.scope - libcontainer container cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386. 
Feb 13 21:25:00.573447 containerd[1824]: time="2025-02-13T21:25:00.573395229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-ftmtc,Uid:0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386\"" Feb 13 21:25:00.577198 systemd-networkd[1610]: cali0c2861587d7: Gained IPv6LL Feb 13 21:25:01.345402 systemd-networkd[1610]: cali54252d25725: Gained IPv6LL Feb 13 21:25:01.353970 containerd[1824]: time="2025-02-13T21:25:01.353953566Z" level=info msg="StopPodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\"" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.379 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.379 [INFO][5327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" iface="eth0" netns="/var/run/netns/cni-8c6ada15-4b09-b2ef-818e-608a5171fcfa" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.379 [INFO][5327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" iface="eth0" netns="/var/run/netns/cni-8c6ada15-4b09-b2ef-818e-608a5171fcfa" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.379 [INFO][5327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" iface="eth0" netns="/var/run/netns/cni-8c6ada15-4b09-b2ef-818e-608a5171fcfa" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.379 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.379 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.389 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.389 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.389 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.393 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.393 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.394 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:01.395582 containerd[1824]: 2025-02-13 21:25:01.394 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:01.395877 containerd[1824]: time="2025-02-13T21:25:01.395644559Z" level=info msg="TearDown network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" successfully" Feb 13 21:25:01.395877 containerd[1824]: time="2025-02-13T21:25:01.395668145Z" level=info msg="StopPodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" returns successfully" Feb 13 21:25:01.396025 containerd[1824]: time="2025-02-13T21:25:01.396014194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-r9z4v,Uid:e73314eb-ec52-4afc-bd2f-8f1593880829,Namespace:calico-apiserver,Attempt:1,}" Feb 13 21:25:01.449629 systemd[1]: run-netns-cni\x2d8c6ada15\x2d4b09\x2db2ef\x2d818e\x2d608a5171fcfa.mount: Deactivated successfully. 
Feb 13 21:25:01.451683 systemd-networkd[1610]: calice8bc483d45: Link UP Feb 13 21:25:01.451800 systemd-networkd[1610]: calice8bc483d45: Gained carrier Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.415 [INFO][5354] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0 calico-apiserver-548d7dd47c- calico-apiserver e73314eb-ec52-4afc-bd2f-8f1593880829 752 0 2025-02-13 21:24:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548d7dd47c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-e8b80a8c0e calico-apiserver-548d7dd47c-r9z4v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calice8bc483d45 [] []}} ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.415 [INFO][5354] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.429 [INFO][5374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" HandleID="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 
21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.434 [INFO][5374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" HandleID="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c7760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-e8b80a8c0e", "pod":"calico-apiserver-548d7dd47c-r9z4v", "timestamp":"2025-02-13 21:25:01.429306604 +0000 UTC"}, Hostname:"ci-4081.3.1-a-e8b80a8c0e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.434 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.434 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.434 [INFO][5374] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-e8b80a8c0e' Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.435 [INFO][5374] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.438 [INFO][5374] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.441 [INFO][5374] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.442 [INFO][5374] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.444 [INFO][5374] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.444 [INFO][5374] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.445 [INFO][5374] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.447 [INFO][5374] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.449 [INFO][5374] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.123.132/26] block=192.168.123.128/26 handle="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.450 [INFO][5374] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.132/26] handle="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.450 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:01.456903 containerd[1824]: 2025-02-13 21:25:01.450 [INFO][5374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.132/26] IPv6=[] ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" HandleID="k8s-pod-network.473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.457420 containerd[1824]: 2025-02-13 21:25:01.450 [INFO][5354] cni-plugin/k8s.go 386: Populated endpoint ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e73314eb-ec52-4afc-bd2f-8f1593880829", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"", Pod:"calico-apiserver-548d7dd47c-r9z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice8bc483d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:01.457420 containerd[1824]: 2025-02-13 21:25:01.450 [INFO][5354] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.132/32] ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.457420 containerd[1824]: 2025-02-13 21:25:01.450 [INFO][5354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice8bc483d45 ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.457420 containerd[1824]: 2025-02-13 21:25:01.451 [INFO][5354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" 
WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.457420 containerd[1824]: 2025-02-13 21:25:01.451 [INFO][5354] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e73314eb-ec52-4afc-bd2f-8f1593880829", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf", Pod:"calico-apiserver-548d7dd47c-r9z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice8bc483d45", MAC:"b6:b2:03:b1:47:c1", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:01.457420 containerd[1824]: 2025-02-13 21:25:01.456 [INFO][5354] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf" Namespace="calico-apiserver" Pod="calico-apiserver-548d7dd47c-r9z4v" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:01.468509 containerd[1824]: time="2025-02-13T21:25:01.466218171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:25:01.468509 containerd[1824]: time="2025-02-13T21:25:01.466247295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:25:01.468509 containerd[1824]: time="2025-02-13T21:25:01.466254784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:01.468509 containerd[1824]: time="2025-02-13T21:25:01.466297363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:01.488258 systemd[1]: Started cri-containerd-473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf.scope - libcontainer container 473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf. 
Feb 13 21:25:01.510722 containerd[1824]: time="2025-02-13T21:25:01.510699168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d7dd47c-r9z4v,Uid:e73314eb-ec52-4afc-bd2f-8f1593880829,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf\"" Feb 13 21:25:01.857216 systemd-networkd[1610]: caliaccfda3efdf: Gained IPv6LL Feb 13 21:25:02.025338 containerd[1824]: time="2025-02-13T21:25:02.025284187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:02.025527 containerd[1824]: time="2025-02-13T21:25:02.025478841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 21:25:02.025782 containerd[1824]: time="2025-02-13T21:25:02.025744103Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:02.027084 containerd[1824]: time="2025-02-13T21:25:02.027045085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:02.027367 containerd[1824]: time="2025-02-13T21:25:02.027323574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.398875558s" Feb 13 21:25:02.027367 containerd[1824]: time="2025-02-13T21:25:02.027342167Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 21:25:02.027848 containerd[1824]: time="2025-02-13T21:25:02.027807325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 21:25:02.030612 containerd[1824]: time="2025-02-13T21:25:02.030594546Z" level=info msg="CreateContainer within sandbox \"9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 21:25:02.034637 containerd[1824]: time="2025-02-13T21:25:02.034623452Z" level=info msg="CreateContainer within sandbox \"9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"225fed14331f98bcee3c9992a18759903a96d76105f0d31bbf9712a6f8ae5293\"" Feb 13 21:25:02.034837 containerd[1824]: time="2025-02-13T21:25:02.034828094Z" level=info msg="StartContainer for \"225fed14331f98bcee3c9992a18759903a96d76105f0d31bbf9712a6f8ae5293\"" Feb 13 21:25:02.057439 systemd[1]: Started cri-containerd-225fed14331f98bcee3c9992a18759903a96d76105f0d31bbf9712a6f8ae5293.scope - libcontainer container 225fed14331f98bcee3c9992a18759903a96d76105f0d31bbf9712a6f8ae5293. 
Feb 13 21:25:02.081185 containerd[1824]: time="2025-02-13T21:25:02.081163627Z" level=info msg="StartContainer for \"225fed14331f98bcee3c9992a18759903a96d76105f0d31bbf9712a6f8ae5293\" returns successfully" Feb 13 21:25:02.353827 containerd[1824]: time="2025-02-13T21:25:02.353737988Z" level=info msg="StopPodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\"" Feb 13 21:25:02.494976 kubelet[3065]: I0213 21:25:02.494854 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-598cb55bb5-kr8j8" podStartSLOduration=21.095310097 podStartE2EDuration="23.494818882s" podCreationTimestamp="2025-02-13 21:24:39 +0000 UTC" firstStartedPulling="2025-02-13 21:24:59.628235027 +0000 UTC m=+30.315418041" lastFinishedPulling="2025-02-13 21:25:02.02774382 +0000 UTC m=+32.714926826" observedRunningTime="2025-02-13 21:25:02.493845393 +0000 UTC m=+33.181028441" watchObservedRunningTime="2025-02-13 21:25:02.494818882 +0000 UTC m=+33.182001913" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.455 [INFO][5516] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.455 [INFO][5516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" iface="eth0" netns="/var/run/netns/cni-cc568343-a710-ade1-fd1f-b3da5890c410" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.456 [INFO][5516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" iface="eth0" netns="/var/run/netns/cni-cc568343-a710-ade1-fd1f-b3da5890c410" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.456 [INFO][5516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" iface="eth0" netns="/var/run/netns/cni-cc568343-a710-ade1-fd1f-b3da5890c410" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.456 [INFO][5516] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.456 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.495 [INFO][5530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.495 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.495 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.502 [WARNING][5530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.502 [INFO][5530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.503 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:02.506069 containerd[1824]: 2025-02-13 21:25:02.504 [INFO][5516] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:02.506823 containerd[1824]: time="2025-02-13T21:25:02.506216078Z" level=info msg="TearDown network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" successfully" Feb 13 21:25:02.506823 containerd[1824]: time="2025-02-13T21:25:02.506245167Z" level=info msg="StopPodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" returns successfully" Feb 13 21:25:02.506937 containerd[1824]: time="2025-02-13T21:25:02.506902359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8w4xv,Uid:57a173d5-81af-4ca8-8cbb-e172ae7536f0,Namespace:kube-system,Attempt:1,}" Feb 13 21:25:02.509072 systemd[1]: run-netns-cni\x2dcc568343\x2da710\x2dade1\x2dfd1f\x2db3da5890c410.mount: Deactivated successfully. 
Feb 13 21:25:02.565794 systemd-networkd[1610]: cali394dbb11886: Link UP Feb 13 21:25:02.565926 systemd-networkd[1610]: cali394dbb11886: Gained carrier Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.527 [INFO][5547] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0 coredns-6f6b679f8f- kube-system 57a173d5-81af-4ca8-8cbb-e172ae7536f0 763 0 2025-02-13 21:24:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-e8b80a8c0e coredns-6f6b679f8f-8w4xv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali394dbb11886 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.527 [INFO][5547] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.541 [INFO][5570] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" HandleID="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.547 [INFO][5570] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" HandleID="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044d5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-e8b80a8c0e", "pod":"coredns-6f6b679f8f-8w4xv", "timestamp":"2025-02-13 21:25:02.541913698 +0000 UTC"}, Hostname:"ci-4081.3.1-a-e8b80a8c0e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.547 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.547 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.547 [INFO][5570] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-e8b80a8c0e' Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.549 [INFO][5570] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.551 [INFO][5570] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.555 [INFO][5570] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.556 [INFO][5570] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.558 [INFO][5570] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.558 [INFO][5570] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.558 [INFO][5570] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8 Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.561 [INFO][5570] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.563 [INFO][5570] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.123.133/26] block=192.168.123.128/26 handle="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.564 [INFO][5570] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.133/26] handle="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.564 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:02.570987 containerd[1824]: 2025-02-13 21:25:02.564 [INFO][5570] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.133/26] IPv6=[] ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" HandleID="k8s-pod-network.4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.571376 containerd[1824]: 2025-02-13 21:25:02.564 [INFO][5547] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"57a173d5-81af-4ca8-8cbb-e172ae7536f0", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"", Pod:"coredns-6f6b679f8f-8w4xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali394dbb11886", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:02.571376 containerd[1824]: 2025-02-13 21:25:02.565 [INFO][5547] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.133/32] ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.571376 containerd[1824]: 2025-02-13 21:25:02.565 [INFO][5547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali394dbb11886 ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.571376 containerd[1824]: 2025-02-13 21:25:02.565 [INFO][5547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.571376 containerd[1824]: 2025-02-13 21:25:02.565 [INFO][5547] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"57a173d5-81af-4ca8-8cbb-e172ae7536f0", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8", Pod:"coredns-6f6b679f8f-8w4xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali394dbb11886", MAC:"92:79:a5:75:e7:19", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:02.571376 containerd[1824]: 2025-02-13 21:25:02.570 [INFO][5547] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-8w4xv" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:02.580320 containerd[1824]: time="2025-02-13T21:25:02.580274977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:25:02.580320 containerd[1824]: time="2025-02-13T21:25:02.580309857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:25:02.580592 containerd[1824]: time="2025-02-13T21:25:02.580512880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:02.580592 containerd[1824]: time="2025-02-13T21:25:02.580569164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:02.597395 systemd[1]: Started cri-containerd-4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8.scope - libcontainer container 4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8. 
Feb 13 21:25:02.622994 containerd[1824]: time="2025-02-13T21:25:02.622932818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8w4xv,Uid:57a173d5-81af-4ca8-8cbb-e172ae7536f0,Namespace:kube-system,Attempt:1,} returns sandbox id \"4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8\"" Feb 13 21:25:02.624238 containerd[1824]: time="2025-02-13T21:25:02.624222581Z" level=info msg="CreateContainer within sandbox \"4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 21:25:02.628516 containerd[1824]: time="2025-02-13T21:25:02.628474072Z" level=info msg="CreateContainer within sandbox \"4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfeae09e655c645f962fcab1753268fd068d3cb15117060bca0b5de049a2c74d\"" Feb 13 21:25:02.628739 containerd[1824]: time="2025-02-13T21:25:02.628689685Z" level=info msg="StartContainer for \"cfeae09e655c645f962fcab1753268fd068d3cb15117060bca0b5de049a2c74d\"" Feb 13 21:25:02.646244 systemd[1]: Started cri-containerd-cfeae09e655c645f962fcab1753268fd068d3cb15117060bca0b5de049a2c74d.scope - libcontainer container cfeae09e655c645f962fcab1753268fd068d3cb15117060bca0b5de049a2c74d. 
Feb 13 21:25:02.659378 containerd[1824]: time="2025-02-13T21:25:02.659351683Z" level=info msg="StartContainer for \"cfeae09e655c645f962fcab1753268fd068d3cb15117060bca0b5de049a2c74d\" returns successfully" Feb 13 21:25:03.009367 systemd-networkd[1610]: calice8bc483d45: Gained IPv6LL Feb 13 21:25:03.354757 containerd[1824]: time="2025-02-13T21:25:03.354511711Z" level=info msg="StopPodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\"" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.395 [INFO][5700] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.395 [INFO][5700] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" iface="eth0" netns="/var/run/netns/cni-4e54d2f3-21e4-2ab6-a7f8-4c61368bffc1" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.396 [INFO][5700] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" iface="eth0" netns="/var/run/netns/cni-4e54d2f3-21e4-2ab6-a7f8-4c61368bffc1" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.396 [INFO][5700] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" iface="eth0" netns="/var/run/netns/cni-4e54d2f3-21e4-2ab6-a7f8-4c61368bffc1" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.396 [INFO][5700] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.396 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.440 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.441 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.441 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.449 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.449 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.451 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:03.454423 containerd[1824]: 2025-02-13 21:25:03.453 [INFO][5700] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:03.454892 containerd[1824]: time="2025-02-13T21:25:03.454524020Z" level=info msg="TearDown network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" successfully" Feb 13 21:25:03.454892 containerd[1824]: time="2025-02-13T21:25:03.454538605Z" level=info msg="StopPodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" returns successfully" Feb 13 21:25:03.455176 containerd[1824]: time="2025-02-13T21:25:03.455120078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w68mc,Uid:439af64d-2df4-4687-a7d1-8e9f8ba04da6,Namespace:calico-system,Attempt:1,}" Feb 13 21:25:03.456498 systemd[1]: run-netns-cni\x2d4e54d2f3\x2d21e4\x2d2ab6\x2da7f8\x2d4c61368bffc1.mount: Deactivated successfully. 
Feb 13 21:25:03.487040 kubelet[3065]: I0213 21:25:03.486993 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8w4xv" podStartSLOduration=29.486975121 podStartE2EDuration="29.486975121s" podCreationTimestamp="2025-02-13 21:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:25:03.486713202 +0000 UTC m=+34.173896211" watchObservedRunningTime="2025-02-13 21:25:03.486975121 +0000 UTC m=+34.174158125" Feb 13 21:25:03.512003 systemd-networkd[1610]: cali22df9fcb1ac: Link UP Feb 13 21:25:03.512135 systemd-networkd[1610]: cali22df9fcb1ac: Gained carrier Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.478 [INFO][5729] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0 csi-node-driver- calico-system 439af64d-2df4-4687-a7d1-8e9f8ba04da6 775 0 2025-02-13 21:24:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-e8b80a8c0e csi-node-driver-w68mc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22df9fcb1ac [] []}} ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.478 [INFO][5729] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" 
WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.492 [INFO][5751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" HandleID="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.497 [INFO][5751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" HandleID="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000525ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-e8b80a8c0e", "pod":"csi-node-driver-w68mc", "timestamp":"2025-02-13 21:25:03.492965217 +0000 UTC"}, Hostname:"ci-4081.3.1-a-e8b80a8c0e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.497 [INFO][5751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.497 [INFO][5751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.497 [INFO][5751] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-e8b80a8c0e' Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.498 [INFO][5751] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.500 [INFO][5751] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.502 [INFO][5751] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.503 [INFO][5751] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.504 [INFO][5751] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.504 [INFO][5751] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.504 [INFO][5751] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56 Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.507 [INFO][5751] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.510 [INFO][5751] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.123.134/26] block=192.168.123.128/26 handle="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.510 [INFO][5751] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.134/26] handle="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" host="ci-4081.3.1-a-e8b80a8c0e" Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.510 [INFO][5751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:03.517772 containerd[1824]: 2025-02-13 21:25:03.510 [INFO][5751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.134/26] IPv6=[] ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" HandleID="k8s-pod-network.dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.518313 containerd[1824]: 2025-02-13 21:25:03.511 [INFO][5729] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"439af64d-2df4-4687-a7d1-8e9f8ba04da6", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"", Pod:"csi-node-driver-w68mc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22df9fcb1ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:03.518313 containerd[1824]: 2025-02-13 21:25:03.511 [INFO][5729] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.134/32] ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.518313 containerd[1824]: 2025-02-13 21:25:03.511 [INFO][5729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22df9fcb1ac ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.518313 containerd[1824]: 2025-02-13 21:25:03.512 [INFO][5729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.518313 containerd[1824]: 2025-02-13 21:25:03.512 
[INFO][5729] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"439af64d-2df4-4687-a7d1-8e9f8ba04da6", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56", Pod:"csi-node-driver-w68mc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22df9fcb1ac", MAC:"f2:32:04:03:88:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:03.518313 containerd[1824]: 2025-02-13 21:25:03.516 [INFO][5729] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56" Namespace="calico-system" Pod="csi-node-driver-w68mc" WorkloadEndpoint="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:03.531496 containerd[1824]: time="2025-02-13T21:25:03.531421121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:25:03.531496 containerd[1824]: time="2025-02-13T21:25:03.531456016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:25:03.531496 containerd[1824]: time="2025-02-13T21:25:03.531463439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:03.531607 containerd[1824]: time="2025-02-13T21:25:03.531505812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:25:03.553424 systemd[1]: Started cri-containerd-dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56.scope - libcontainer container dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56. 
Feb 13 21:25:03.565333 containerd[1824]: time="2025-02-13T21:25:03.565263030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w68mc,Uid:439af64d-2df4-4687-a7d1-8e9f8ba04da6,Namespace:calico-system,Attempt:1,} returns sandbox id \"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56\"" Feb 13 21:25:03.905411 systemd-networkd[1610]: cali394dbb11886: Gained IPv6LL Feb 13 21:25:04.836943 containerd[1824]: time="2025-02-13T21:25:04.836891638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:04.837155 containerd[1824]: time="2025-02-13T21:25:04.837107513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 21:25:04.837492 containerd[1824]: time="2025-02-13T21:25:04.837436958Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:04.838612 containerd[1824]: time="2025-02-13T21:25:04.838571081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:04.838949 containerd[1824]: time="2025-02-13T21:25:04.838907626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.811084664s" Feb 13 21:25:04.838949 containerd[1824]: time="2025-02-13T21:25:04.838924336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 21:25:04.839442 containerd[1824]: time="2025-02-13T21:25:04.839393892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 21:25:04.839886 containerd[1824]: time="2025-02-13T21:25:04.839834176Z" level=info msg="CreateContainer within sandbox \"cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 21:25:04.843700 containerd[1824]: time="2025-02-13T21:25:04.843655563Z" level=info msg="CreateContainer within sandbox \"cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c556b7805091c51a922c74a682a4246db874bf79eb4613af61f46692f7b5bc34\"" Feb 13 21:25:04.843917 containerd[1824]: time="2025-02-13T21:25:04.843903926Z" level=info msg="StartContainer for \"c556b7805091c51a922c74a682a4246db874bf79eb4613af61f46692f7b5bc34\"" Feb 13 21:25:04.875394 systemd[1]: Started cri-containerd-c556b7805091c51a922c74a682a4246db874bf79eb4613af61f46692f7b5bc34.scope - libcontainer container c556b7805091c51a922c74a682a4246db874bf79eb4613af61f46692f7b5bc34. 
Feb 13 21:25:04.902536 containerd[1824]: time="2025-02-13T21:25:04.902482049Z" level=info msg="StartContainer for \"c556b7805091c51a922c74a682a4246db874bf79eb4613af61f46692f7b5bc34\" returns successfully" Feb 13 21:25:05.057247 systemd-networkd[1610]: cali22df9fcb1ac: Gained IPv6LL Feb 13 21:25:05.307049 containerd[1824]: time="2025-02-13T21:25:05.307021712Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:05.307229 containerd[1824]: time="2025-02-13T21:25:05.307209760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 21:25:05.309202 containerd[1824]: time="2025-02-13T21:25:05.309187496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 469.779478ms" Feb 13 21:25:05.309244 containerd[1824]: time="2025-02-13T21:25:05.309203417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 21:25:05.309866 containerd[1824]: time="2025-02-13T21:25:05.309834227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 21:25:05.310491 containerd[1824]: time="2025-02-13T21:25:05.310476013Z" level=info msg="CreateContainer within sandbox \"473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 21:25:05.332141 containerd[1824]: time="2025-02-13T21:25:05.332125132Z" level=info msg="CreateContainer within sandbox 
\"473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0fcb14ad322ed39c360cffaf4b76771f3e7c71cb0a5405bf3c5ca0a03fb928f3\"" Feb 13 21:25:05.332443 containerd[1824]: time="2025-02-13T21:25:05.332412222Z" level=info msg="StartContainer for \"0fcb14ad322ed39c360cffaf4b76771f3e7c71cb0a5405bf3c5ca0a03fb928f3\"" Feb 13 21:25:05.353241 systemd[1]: Started cri-containerd-0fcb14ad322ed39c360cffaf4b76771f3e7c71cb0a5405bf3c5ca0a03fb928f3.scope - libcontainer container 0fcb14ad322ed39c360cffaf4b76771f3e7c71cb0a5405bf3c5ca0a03fb928f3. Feb 13 21:25:05.376897 containerd[1824]: time="2025-02-13T21:25:05.376871928Z" level=info msg="StartContainer for \"0fcb14ad322ed39c360cffaf4b76771f3e7c71cb0a5405bf3c5ca0a03fb928f3\" returns successfully" Feb 13 21:25:05.495007 kubelet[3065]: I0213 21:25:05.494978 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548d7dd47c-r9z4v" podStartSLOduration=22.696512136 podStartE2EDuration="26.494964708s" podCreationTimestamp="2025-02-13 21:24:39 +0000 UTC" firstStartedPulling="2025-02-13 21:25:01.51127012 +0000 UTC m=+32.198453128" lastFinishedPulling="2025-02-13 21:25:05.30972269 +0000 UTC m=+35.996905700" observedRunningTime="2025-02-13 21:25:05.494791844 +0000 UTC m=+36.181974852" watchObservedRunningTime="2025-02-13 21:25:05.494964708 +0000 UTC m=+36.182147711" Feb 13 21:25:05.499280 kubelet[3065]: I0213 21:25:05.499191 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-548d7dd47c-ftmtc" podStartSLOduration=22.23384863 podStartE2EDuration="26.499177005s" podCreationTimestamp="2025-02-13 21:24:39 +0000 UTC" firstStartedPulling="2025-02-13 21:25:00.573998073 +0000 UTC m=+31.261181079" lastFinishedPulling="2025-02-13 21:25:04.839326449 +0000 UTC m=+35.526509454" observedRunningTime="2025-02-13 21:25:05.498938046 +0000 UTC m=+36.186121054" 
watchObservedRunningTime="2025-02-13 21:25:05.499177005 +0000 UTC m=+36.186360012" Feb 13 21:25:06.494988 kubelet[3065]: I0213 21:25:06.494916 3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:25:07.512059 containerd[1824]: time="2025-02-13T21:25:07.512013108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:07.512350 containerd[1824]: time="2025-02-13T21:25:07.512258266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 21:25:07.512736 containerd[1824]: time="2025-02-13T21:25:07.512695766Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:07.513668 containerd[1824]: time="2025-02-13T21:25:07.513621066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:07.514383 containerd[1824]: time="2025-02-13T21:25:07.514343350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.204472695s" Feb 13 21:25:07.514383 containerd[1824]: time="2025-02-13T21:25:07.514358413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 21:25:07.515471 containerd[1824]: time="2025-02-13T21:25:07.515435524Z" level=info msg="CreateContainer within sandbox 
\"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 21:25:07.521031 containerd[1824]: time="2025-02-13T21:25:07.520988964Z" level=info msg="CreateContainer within sandbox \"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"04ffa0664bb229fdc521b5d415ce83064e1e824b52302a1976c1d71a6bd793d5\"" Feb 13 21:25:07.521351 containerd[1824]: time="2025-02-13T21:25:07.521283954Z" level=info msg="StartContainer for \"04ffa0664bb229fdc521b5d415ce83064e1e824b52302a1976c1d71a6bd793d5\"" Feb 13 21:25:07.544448 systemd[1]: Started cri-containerd-04ffa0664bb229fdc521b5d415ce83064e1e824b52302a1976c1d71a6bd793d5.scope - libcontainer container 04ffa0664bb229fdc521b5d415ce83064e1e824b52302a1976c1d71a6bd793d5. Feb 13 21:25:07.557126 containerd[1824]: time="2025-02-13T21:25:07.557093453Z" level=info msg="StartContainer for \"04ffa0664bb229fdc521b5d415ce83064e1e824b52302a1976c1d71a6bd793d5\" returns successfully" Feb 13 21:25:07.557722 containerd[1824]: time="2025-02-13T21:25:07.557704939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 21:25:08.851552 kubelet[3065]: I0213 21:25:08.851474 3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:25:09.314546 containerd[1824]: time="2025-02-13T21:25:09.314488421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:09.314769 containerd[1824]: time="2025-02-13T21:25:09.314720127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 21:25:09.315055 containerd[1824]: time="2025-02-13T21:25:09.315043448Z" level=info msg="ImageCreate event 
name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:09.316202 containerd[1824]: time="2025-02-13T21:25:09.316153425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:25:09.316580 containerd[1824]: time="2025-02-13T21:25:09.316537785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.758808579s" Feb 13 21:25:09.316580 containerd[1824]: time="2025-02-13T21:25:09.316556017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 21:25:09.317524 containerd[1824]: time="2025-02-13T21:25:09.317495415Z" level=info msg="CreateContainer within sandbox \"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 21:25:09.322615 containerd[1824]: time="2025-02-13T21:25:09.322597704Z" level=info msg="CreateContainer within sandbox \"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f68395ccf08b2898c761248b11b8fcaa0048ffbb58313c281072d7458c60a00a\"" Feb 13 21:25:09.322891 containerd[1824]: time="2025-02-13T21:25:09.322875320Z" level=info msg="StartContainer for 
\"f68395ccf08b2898c761248b11b8fcaa0048ffbb58313c281072d7458c60a00a\"" Feb 13 21:25:09.346223 systemd[1]: Started cri-containerd-f68395ccf08b2898c761248b11b8fcaa0048ffbb58313c281072d7458c60a00a.scope - libcontainer container f68395ccf08b2898c761248b11b8fcaa0048ffbb58313c281072d7458c60a00a. Feb 13 21:25:09.359076 containerd[1824]: time="2025-02-13T21:25:09.359052077Z" level=info msg="StartContainer for \"f68395ccf08b2898c761248b11b8fcaa0048ffbb58313c281072d7458c60a00a\" returns successfully" Feb 13 21:25:09.388857 kubelet[3065]: I0213 21:25:09.388828 3065 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 21:25:09.388857 kubelet[3065]: I0213 21:25:09.388860 3065 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 21:25:09.508979 kubelet[3065]: I0213 21:25:09.508942 3065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w68mc" podStartSLOduration=24.757981061 podStartE2EDuration="30.508929971s" podCreationTimestamp="2025-02-13 21:24:39 +0000 UTC" firstStartedPulling="2025-02-13 21:25:03.565962575 +0000 UTC m=+34.253145586" lastFinishedPulling="2025-02-13 21:25:09.31691149 +0000 UTC m=+40.004094496" observedRunningTime="2025-02-13 21:25:09.508578705 +0000 UTC m=+40.195761723" watchObservedRunningTime="2025-02-13 21:25:09.508929971 +0000 UTC m=+40.196112981" Feb 13 21:25:12.101174 kubelet[3065]: I0213 21:25:12.101062 3065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:25:29.350845 containerd[1824]: time="2025-02-13T21:25:29.350757727Z" level=info msg="StopPodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\"" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.370 [WARNING][6167] cni-plugin/k8s.go 572: CNI_CONTAINERID 
does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"702ade0b-3b81-452f-8c6e-de622906e0bd", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb", Pod:"coredns-6f6b679f8f-6r4x2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54252d25725", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 
21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.370 [INFO][6167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.370 [INFO][6167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" iface="eth0" netns="" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.370 [INFO][6167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.370 [INFO][6167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.382 [INFO][6182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.383 [INFO][6182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.383 [INFO][6182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.387 [WARNING][6182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.387 [INFO][6182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.387 [INFO][6182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.389355 containerd[1824]: 2025-02-13 21:25:29.388 [INFO][6167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.389883 containerd[1824]: time="2025-02-13T21:25:29.389379326Z" level=info msg="TearDown network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" successfully" Feb 13 21:25:29.389883 containerd[1824]: time="2025-02-13T21:25:29.389399248Z" level=info msg="StopPodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" returns successfully" Feb 13 21:25:29.389883 containerd[1824]: time="2025-02-13T21:25:29.389725110Z" level=info msg="RemovePodSandbox for \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\"" Feb 13 21:25:29.389883 containerd[1824]: time="2025-02-13T21:25:29.389750186Z" level=info msg="Forcibly stopping sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\"" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.410 [WARNING][6212] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"702ade0b-3b81-452f-8c6e-de622906e0bd", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"9b829e7ab6178767ec8f200a3fe66b633e338a7735e046a402352dd31fdbc0eb", Pod:"coredns-6f6b679f8f-6r4x2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54252d25725", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.410 [INFO][6212] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.410 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" iface="eth0" netns="" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.410 [INFO][6212] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.410 [INFO][6212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.421 [INFO][6225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.421 [INFO][6225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.421 [INFO][6225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.425 [WARNING][6225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.425 [INFO][6225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" HandleID="k8s-pod-network.bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--6r4x2-eth0" Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.426 [INFO][6225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.427638 containerd[1824]: 2025-02-13 21:25:29.426 [INFO][6212] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272" Feb 13 21:25:29.427638 containerd[1824]: time="2025-02-13T21:25:29.427638455Z" level=info msg="TearDown network for sandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" successfully" Feb 13 21:25:29.429062 containerd[1824]: time="2025-02-13T21:25:29.429048563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:25:29.429093 containerd[1824]: time="2025-02-13T21:25:29.429078147Z" level=info msg="RemovePodSandbox \"bff930e35d1dee4f2fb120c25fef0786a33cb4fd782ba1e5d25834a2cd843272\" returns successfully" Feb 13 21:25:29.429412 containerd[1824]: time="2025-02-13T21:25:29.429401006Z" level=info msg="StopPodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\"" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.447 [WARNING][6251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0", GenerateName:"calico-kube-controllers-598cb55bb5-", Namespace:"calico-system", SelfLink:"", UID:"a48dead5-98c2-41d1-85bc-a236403168ea", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598cb55bb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252", Pod:"calico-kube-controllers-598cb55bb5-kr8j8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c2861587d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.447 [INFO][6251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.447 [INFO][6251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" iface="eth0" netns="" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.447 [INFO][6251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.447 [INFO][6251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.458 [INFO][6265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.458 [INFO][6265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.458 [INFO][6265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.462 [WARNING][6265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.462 [INFO][6265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.463 [INFO][6265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.464729 containerd[1824]: 2025-02-13 21:25:29.464 [INFO][6251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.465043 containerd[1824]: time="2025-02-13T21:25:29.464753119Z" level=info msg="TearDown network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" successfully" Feb 13 21:25:29.465043 containerd[1824]: time="2025-02-13T21:25:29.464772458Z" level=info msg="StopPodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" returns successfully" Feb 13 21:25:29.465086 containerd[1824]: time="2025-02-13T21:25:29.465061379Z" level=info msg="RemovePodSandbox for \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\"" Feb 13 21:25:29.465086 containerd[1824]: time="2025-02-13T21:25:29.465079381Z" level=info msg="Forcibly stopping sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\"" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.485 [WARNING][6293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0", GenerateName:"calico-kube-controllers-598cb55bb5-", Namespace:"calico-system", SelfLink:"", UID:"a48dead5-98c2-41d1-85bc-a236403168ea", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598cb55bb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"9eb7c29745e2f206d29dcb7230e101edeb844a846f6c924183584b78b1610252", Pod:"calico-kube-controllers-598cb55bb5-kr8j8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c2861587d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.485 [INFO][6293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.485 [INFO][6293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" iface="eth0" netns="" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.485 [INFO][6293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.485 [INFO][6293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.499 [INFO][6308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.499 [INFO][6308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.499 [INFO][6308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.503 [WARNING][6308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.503 [INFO][6308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" HandleID="k8s-pod-network.1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--kube--controllers--598cb55bb5--kr8j8-eth0" Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.505 [INFO][6308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.506823 containerd[1824]: 2025-02-13 21:25:29.505 [INFO][6293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033" Feb 13 21:25:29.507267 containerd[1824]: time="2025-02-13T21:25:29.506852699Z" level=info msg="TearDown network for sandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" successfully" Feb 13 21:25:29.508405 containerd[1824]: time="2025-02-13T21:25:29.508393842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:25:29.508434 containerd[1824]: time="2025-02-13T21:25:29.508418122Z" level=info msg="RemovePodSandbox \"1c4b96b8640d6f730a392a121517f090f5533d91caef0467484be183dd472033\" returns successfully" Feb 13 21:25:29.508667 containerd[1824]: time="2025-02-13T21:25:29.508657311Z" level=info msg="StopPodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\"" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.526 [WARNING][6337] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e73314eb-ec52-4afc-bd2f-8f1593880829", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf", Pod:"calico-apiserver-548d7dd47c-r9z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice8bc483d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.526 [INFO][6337] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.526 [INFO][6337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" iface="eth0" netns="" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.526 [INFO][6337] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.527 [INFO][6337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.537 [INFO][6349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.537 [INFO][6349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.537 [INFO][6349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.540 [WARNING][6349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.540 [INFO][6349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.541 [INFO][6349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.543298 containerd[1824]: 2025-02-13 21:25:29.542 [INFO][6337] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.543714 containerd[1824]: time="2025-02-13T21:25:29.543321904Z" level=info msg="TearDown network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" successfully" Feb 13 21:25:29.543714 containerd[1824]: time="2025-02-13T21:25:29.543337711Z" level=info msg="StopPodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" returns successfully" Feb 13 21:25:29.543714 containerd[1824]: time="2025-02-13T21:25:29.543603914Z" level=info msg="RemovePodSandbox for \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\"" Feb 13 21:25:29.543714 containerd[1824]: time="2025-02-13T21:25:29.543624870Z" level=info msg="Forcibly stopping sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\"" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.564 [WARNING][6376] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e73314eb-ec52-4afc-bd2f-8f1593880829", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"473d8399ed3b0507db0823b96364248363b5cf0382f8946515154a8ece7e62bf", Pod:"calico-apiserver-548d7dd47c-r9z4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice8bc483d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.564 [INFO][6376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.564 [INFO][6376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" iface="eth0" netns="" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.565 [INFO][6376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.565 [INFO][6376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.577 [INFO][6392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.577 [INFO][6392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.577 [INFO][6392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.581 [WARNING][6392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.581 [INFO][6392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" HandleID="k8s-pod-network.b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--r9z4v-eth0" Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.582 [INFO][6392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.584025 containerd[1824]: 2025-02-13 21:25:29.583 [INFO][6376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f" Feb 13 21:25:29.584358 containerd[1824]: time="2025-02-13T21:25:29.584052213Z" level=info msg="TearDown network for sandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" successfully" Feb 13 21:25:29.585374 containerd[1824]: time="2025-02-13T21:25:29.585361525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:25:29.585413 containerd[1824]: time="2025-02-13T21:25:29.585387040Z" level=info msg="RemovePodSandbox \"b6d946f1b37b14d10cbb734b7f190ce3dee7e2161e9789b7ea575df939c7549f\" returns successfully" Feb 13 21:25:29.585698 containerd[1824]: time="2025-02-13T21:25:29.585653149Z" level=info msg="StopPodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\"" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.604 [WARNING][6418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386", Pod:"calico-apiserver-548d7dd47c-ftmtc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaccfda3efdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.604 [INFO][6418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.604 [INFO][6418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" iface="eth0" netns="" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.604 [INFO][6418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.604 [INFO][6418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.614 [INFO][6430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.614 [INFO][6430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.614 [INFO][6430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.618 [WARNING][6430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.618 [INFO][6430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.619 [INFO][6430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.621159 containerd[1824]: 2025-02-13 21:25:29.620 [INFO][6418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.621159 containerd[1824]: time="2025-02-13T21:25:29.621126372Z" level=info msg="TearDown network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" successfully" Feb 13 21:25:29.621159 containerd[1824]: time="2025-02-13T21:25:29.621143691Z" level=info msg="StopPodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" returns successfully" Feb 13 21:25:29.621457 containerd[1824]: time="2025-02-13T21:25:29.621422551Z" level=info msg="RemovePodSandbox for \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\"" Feb 13 21:25:29.621457 containerd[1824]: time="2025-02-13T21:25:29.621439542Z" level=info msg="Forcibly stopping sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\"" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.640 [WARNING][6456] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0", GenerateName:"calico-apiserver-548d7dd47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0324ae41-f4ee-4e56-95ba-ccaaf0b0e39e", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d7dd47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"cf95e74d27ceb274b9f3c8aee95522115b25fe32a341ac719a3e8b0ed5b82386", Pod:"calico-apiserver-548d7dd47c-ftmtc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaccfda3efdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.640 [INFO][6456] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.640 [INFO][6456] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" iface="eth0" netns="" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.640 [INFO][6456] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.640 [INFO][6456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.651 [INFO][6471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.651 [INFO][6471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.651 [INFO][6471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.656 [WARNING][6471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.656 [INFO][6471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" HandleID="k8s-pod-network.981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-calico--apiserver--548d7dd47c--ftmtc-eth0" Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.657 [INFO][6471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.658975 containerd[1824]: 2025-02-13 21:25:29.658 [INFO][6456] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2" Feb 13 21:25:29.659323 containerd[1824]: time="2025-02-13T21:25:29.659002227Z" level=info msg="TearDown network for sandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" successfully" Feb 13 21:25:29.676464 containerd[1824]: time="2025-02-13T21:25:29.676405689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:25:29.676464 containerd[1824]: time="2025-02-13T21:25:29.676434162Z" level=info msg="RemovePodSandbox \"981dd8a1062615f0f4d6ae7ad022d9f6e8e0141785c98943fe91be50dccf1db2\" returns successfully" Feb 13 21:25:29.676749 containerd[1824]: time="2025-02-13T21:25:29.676703719Z" level=info msg="StopPodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\"" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.696 [WARNING][6501] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"57a173d5-81af-4ca8-8cbb-e172ae7536f0", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8", Pod:"coredns-6f6b679f8f-8w4xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali394dbb11886", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.696 [INFO][6501] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.696 [INFO][6501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" iface="eth0" netns="" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.696 [INFO][6501] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.696 [INFO][6501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.710 [INFO][6513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.710 [INFO][6513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.710 [INFO][6513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.715 [WARNING][6513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.715 [INFO][6513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.716 [INFO][6513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.718544 containerd[1824]: 2025-02-13 21:25:29.717 [INFO][6501] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.718949 containerd[1824]: time="2025-02-13T21:25:29.718542951Z" level=info msg="TearDown network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" successfully" Feb 13 21:25:29.718949 containerd[1824]: time="2025-02-13T21:25:29.718564764Z" level=info msg="StopPodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" returns successfully" Feb 13 21:25:29.718949 containerd[1824]: time="2025-02-13T21:25:29.718906173Z" level=info msg="RemovePodSandbox for \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\"" Feb 13 21:25:29.718949 containerd[1824]: time="2025-02-13T21:25:29.718936099Z" level=info msg="Forcibly stopping sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\"" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.745 [WARNING][6543] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"57a173d5-81af-4ca8-8cbb-e172ae7536f0", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"4909b52c0bb8a32885c95f3654841ded3fe7a5848e066d77fc3a5b14312168c8", Pod:"coredns-6f6b679f8f-8w4xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali394dbb11886", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.745 [INFO][6543] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.745 [INFO][6543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" iface="eth0" netns="" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.745 [INFO][6543] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.745 [INFO][6543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.762 [INFO][6559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.762 [INFO][6559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.762 [INFO][6559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.767 [WARNING][6559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.767 [INFO][6559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" HandleID="k8s-pod-network.95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-coredns--6f6b679f8f--8w4xv-eth0" Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.768 [INFO][6559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.770659 containerd[1824]: 2025-02-13 21:25:29.769 [INFO][6543] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7" Feb 13 21:25:29.771392 containerd[1824]: time="2025-02-13T21:25:29.770693820Z" level=info msg="TearDown network for sandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" successfully" Feb 13 21:25:29.772529 containerd[1824]: time="2025-02-13T21:25:29.772513811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:25:29.772574 containerd[1824]: time="2025-02-13T21:25:29.772547351Z" level=info msg="RemovePodSandbox \"95106b0628fe07e061528fd9da4daf227c057cde0595fa50c2734cfbf4b4f3c7\" returns successfully" Feb 13 21:25:29.772845 containerd[1824]: time="2025-02-13T21:25:29.772816734Z" level=info msg="StopPodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\"" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.792 [WARNING][6587] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"439af64d-2df4-4687-a7d1-8e9f8ba04da6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56", Pod:"csi-node-driver-w68mc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22df9fcb1ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.792 [INFO][6587] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.792 [INFO][6587] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" iface="eth0" netns="" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.792 [INFO][6587] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.792 [INFO][6587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.804 [INFO][6601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.804 [INFO][6601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.804 [INFO][6601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.808 [WARNING][6601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.808 [INFO][6601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.810 [INFO][6601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.811495 containerd[1824]: 2025-02-13 21:25:29.810 [INFO][6587] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.811838 containerd[1824]: time="2025-02-13T21:25:29.811521262Z" level=info msg="TearDown network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" successfully" Feb 13 21:25:29.811838 containerd[1824]: time="2025-02-13T21:25:29.811538996Z" level=info msg="StopPodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" returns successfully" Feb 13 21:25:29.811881 containerd[1824]: time="2025-02-13T21:25:29.811866763Z" level=info msg="RemovePodSandbox for \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\"" Feb 13 21:25:29.811922 containerd[1824]: time="2025-02-13T21:25:29.811884574Z" level=info msg="Forcibly stopping sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\"" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.834 [WARNING][6631] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"439af64d-2df4-4687-a7d1-8e9f8ba04da6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-e8b80a8c0e", ContainerID:"dfe6c30cc21195ddfc6fa0d048e9dbe5f2face9ce27b4cdcd14da51fb4150b56", Pod:"csi-node-driver-w68mc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22df9fcb1ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.834 [INFO][6631] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.834 [INFO][6631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" iface="eth0" netns="" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.834 [INFO][6631] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.834 [INFO][6631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.848 [INFO][6647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.848 [INFO][6647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.848 [INFO][6647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.852 [WARNING][6647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.852 [INFO][6647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" HandleID="k8s-pod-network.3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Workload="ci--4081.3.1--a--e8b80a8c0e-k8s-csi--node--driver--w68mc-eth0" Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.853 [INFO][6647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:25:29.855132 containerd[1824]: 2025-02-13 21:25:29.854 [INFO][6631] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe" Feb 13 21:25:29.855708 containerd[1824]: time="2025-02-13T21:25:29.855165315Z" level=info msg="TearDown network for sandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" successfully" Feb 13 21:25:29.867194 containerd[1824]: time="2025-02-13T21:25:29.867179588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:25:29.867234 containerd[1824]: time="2025-02-13T21:25:29.867213587Z" level=info msg="RemovePodSandbox \"3a284a31a6dbbd67f9c4ffdbb4e89409b2856a9fbdc770dbcced7eb6db656fbe\" returns successfully" Feb 13 21:30:11.720815 systemd[1]: Started sshd@9-147.28.180.221:22-139.178.89.65:51574.service - OpenSSH per-connection server daemon (139.178.89.65:51574). 
Feb 13 21:30:11.763759 sshd[7322]: Accepted publickey for core from 139.178.89.65 port 51574 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 21:30:11.765224 sshd[7322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:30:11.770151 systemd-logind[1806]: New session 12 of user core. Feb 13 21:30:11.783409 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 21:30:11.878446 sshd[7322]: pam_unix(sshd:session): session closed for user core Feb 13 21:30:11.880056 systemd[1]: sshd@9-147.28.180.221:22-139.178.89.65:51574.service: Deactivated successfully. Feb 13 21:30:11.880997 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 21:30:11.881783 systemd-logind[1806]: Session 12 logged out. Waiting for processes to exit. Feb 13 21:30:11.882379 systemd-logind[1806]: Removed session 12. Feb 13 21:30:16.896035 systemd[1]: Started sshd@10-147.28.180.221:22-139.178.89.65:47228.service - OpenSSH per-connection server daemon (139.178.89.65:47228). Feb 13 21:30:16.927730 sshd[7356]: Accepted publickey for core from 139.178.89.65 port 47228 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U Feb 13 21:30:16.928646 sshd[7356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:30:16.931826 systemd-logind[1806]: New session 13 of user core. Feb 13 21:30:16.945233 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 21:30:17.031427 sshd[7356]: pam_unix(sshd:session): session closed for user core Feb 13 21:30:17.032971 systemd[1]: sshd@10-147.28.180.221:22-139.178.89.65:47228.service: Deactivated successfully. Feb 13 21:30:17.033922 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 21:30:17.034667 systemd-logind[1806]: Session 13 logged out. Waiting for processes to exit. Feb 13 21:30:17.035269 systemd-logind[1806]: Removed session 13. 
Feb 13 21:30:22.051703 systemd[1]: Started sshd@11-147.28.180.221:22-139.178.89.65:47234.service - OpenSSH per-connection server daemon (139.178.89.65:47234).
Feb 13 21:30:22.084265 sshd[7403]: Accepted publickey for core from 139.178.89.65 port 47234 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:22.085359 sshd[7403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:22.089532 systemd-logind[1806]: New session 14 of user core.
Feb 13 21:30:22.109572 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 21:30:22.205707 sshd[7403]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:22.225492 systemd[1]: sshd@11-147.28.180.221:22-139.178.89.65:47234.service: Deactivated successfully.
Feb 13 21:30:22.229471 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 21:30:22.232886 systemd-logind[1806]: Session 14 logged out. Waiting for processes to exit.
Feb 13 21:30:22.243822 systemd[1]: Started sshd@12-147.28.180.221:22-139.178.89.65:47246.service - OpenSSH per-connection server daemon (139.178.89.65:47246).
Feb 13 21:30:22.246339 systemd-logind[1806]: Removed session 14.
Feb 13 21:30:22.319370 sshd[7430]: Accepted publickey for core from 139.178.89.65 port 47246 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:22.323433 sshd[7430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:22.335081 systemd-logind[1806]: New session 15 of user core.
Feb 13 21:30:22.344500 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 21:30:22.490223 sshd[7430]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:22.503861 systemd[1]: sshd@12-147.28.180.221:22-139.178.89.65:47246.service: Deactivated successfully.
Feb 13 21:30:22.504734 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 21:30:22.505473 systemd-logind[1806]: Session 15 logged out. Waiting for processes to exit.
Feb 13 21:30:22.506140 systemd[1]: Started sshd@13-147.28.180.221:22-139.178.89.65:47256.service - OpenSSH per-connection server daemon (139.178.89.65:47256).
Feb 13 21:30:22.506628 systemd-logind[1806]: Removed session 15.
Feb 13 21:30:22.539206 sshd[7456]: Accepted publickey for core from 139.178.89.65 port 47256 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:22.542616 sshd[7456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:22.554306 systemd-logind[1806]: New session 16 of user core.
Feb 13 21:30:22.577544 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 21:30:22.672908 sshd[7456]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:22.674503 systemd[1]: sshd@13-147.28.180.221:22-139.178.89.65:47256.service: Deactivated successfully.
Feb 13 21:30:22.675392 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 21:30:22.676080 systemd-logind[1806]: Session 16 logged out. Waiting for processes to exit.
Feb 13 21:30:22.676781 systemd-logind[1806]: Removed session 16.
Feb 13 21:30:27.703061 systemd[1]: Started sshd@14-147.28.180.221:22-139.178.89.65:53000.service - OpenSSH per-connection server daemon (139.178.89.65:53000).
Feb 13 21:30:27.735458 sshd[7485]: Accepted publickey for core from 139.178.89.65 port 53000 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:27.738939 sshd[7485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:27.750131 systemd-logind[1806]: New session 17 of user core.
Feb 13 21:30:27.766672 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 21:30:27.864926 sshd[7485]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:27.866375 systemd[1]: sshd@14-147.28.180.221:22-139.178.89.65:53000.service: Deactivated successfully.
Feb 13 21:30:27.867272 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 21:30:27.868002 systemd-logind[1806]: Session 17 logged out. Waiting for processes to exit.
Feb 13 21:30:27.868737 systemd-logind[1806]: Removed session 17.
Feb 13 21:30:32.894305 systemd[1]: Started sshd@15-147.28.180.221:22-139.178.89.65:53008.service - OpenSSH per-connection server daemon (139.178.89.65:53008).
Feb 13 21:30:32.924895 sshd[7514]: Accepted publickey for core from 139.178.89.65 port 53008 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:32.928357 sshd[7514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:32.939582 systemd-logind[1806]: New session 18 of user core.
Feb 13 21:30:32.953530 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 21:30:33.050847 sshd[7514]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:33.053163 systemd[1]: sshd@15-147.28.180.221:22-139.178.89.65:53008.service: Deactivated successfully.
Feb 13 21:30:33.054102 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 21:30:33.054624 systemd-logind[1806]: Session 18 logged out. Waiting for processes to exit.
Feb 13 21:30:33.055188 systemd-logind[1806]: Removed session 18.
Feb 13 21:30:38.089332 systemd[1]: Started sshd@16-147.28.180.221:22-139.178.89.65:51516.service - OpenSSH per-connection server daemon (139.178.89.65:51516).
Feb 13 21:30:38.119862 sshd[7544]: Accepted publickey for core from 139.178.89.65 port 51516 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:38.120742 sshd[7544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:38.124183 systemd-logind[1806]: New session 19 of user core.
Feb 13 21:30:38.142408 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 21:30:38.230088 sshd[7544]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:38.246908 systemd[1]: sshd@16-147.28.180.221:22-139.178.89.65:51516.service: Deactivated successfully.
Feb 13 21:30:38.247786 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 21:30:38.248607 systemd-logind[1806]: Session 19 logged out. Waiting for processes to exit.
Feb 13 21:30:38.249261 systemd[1]: Started sshd@17-147.28.180.221:22-139.178.89.65:51520.service - OpenSSH per-connection server daemon (139.178.89.65:51520).
Feb 13 21:30:38.249844 systemd-logind[1806]: Removed session 19.
Feb 13 21:30:38.284668 sshd[7569]: Accepted publickey for core from 139.178.89.65 port 51520 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:38.285930 sshd[7569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:38.290960 systemd-logind[1806]: New session 20 of user core.
Feb 13 21:30:38.307602 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 21:30:38.469237 sshd[7569]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:38.485347 systemd[1]: sshd@17-147.28.180.221:22-139.178.89.65:51520.service: Deactivated successfully.
Feb 13 21:30:38.486479 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 21:30:38.487459 systemd-logind[1806]: Session 20 logged out. Waiting for processes to exit.
Feb 13 21:30:38.488403 systemd[1]: Started sshd@18-147.28.180.221:22-139.178.89.65:51532.service - OpenSSH per-connection server daemon (139.178.89.65:51532).
Feb 13 21:30:38.488998 systemd-logind[1806]: Removed session 20.
Feb 13 21:30:38.530491 sshd[7593]: Accepted publickey for core from 139.178.89.65 port 51532 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:38.532471 sshd[7593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:38.540777 systemd-logind[1806]: New session 21 of user core.
Feb 13 21:30:38.560547 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 21:30:39.728259 sshd[7593]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:39.749767 systemd[1]: sshd@18-147.28.180.221:22-139.178.89.65:51532.service: Deactivated successfully.
Feb 13 21:30:39.754093 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 21:30:39.757782 systemd-logind[1806]: Session 21 logged out. Waiting for processes to exit.
Feb 13 21:30:39.778134 systemd[1]: Started sshd@19-147.28.180.221:22-139.178.89.65:51534.service - OpenSSH per-connection server daemon (139.178.89.65:51534).
Feb 13 21:30:39.781136 systemd-logind[1806]: Removed session 21.
Feb 13 21:30:39.829332 sshd[7654]: Accepted publickey for core from 139.178.89.65 port 51534 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:39.830884 sshd[7654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:39.835894 systemd-logind[1806]: New session 22 of user core.
Feb 13 21:30:39.859502 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 21:30:40.084826 sshd[7654]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:40.094689 systemd[1]: sshd@19-147.28.180.221:22-139.178.89.65:51534.service: Deactivated successfully.
Feb 13 21:30:40.095468 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 21:30:40.096181 systemd-logind[1806]: Session 22 logged out. Waiting for processes to exit.
Feb 13 21:30:40.096755 systemd[1]: Started sshd@20-147.28.180.221:22-139.178.89.65:51550.service - OpenSSH per-connection server daemon (139.178.89.65:51550).
Feb 13 21:30:40.097128 systemd-logind[1806]: Removed session 22.
Feb 13 21:30:40.129509 sshd[7679]: Accepted publickey for core from 139.178.89.65 port 51550 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:40.132993 sshd[7679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:40.144423 systemd-logind[1806]: New session 23 of user core.
Feb 13 21:30:40.165825 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 21:30:40.284454 sshd[7679]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:40.285967 systemd[1]: sshd@20-147.28.180.221:22-139.178.89.65:51550.service: Deactivated successfully.
Feb 13 21:30:40.286841 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 21:30:40.287503 systemd-logind[1806]: Session 23 logged out. Waiting for processes to exit.
Feb 13 21:30:40.288001 systemd-logind[1806]: Removed session 23.
Feb 13 21:30:45.327437 systemd[1]: Started sshd@21-147.28.180.221:22-139.178.89.65:38526.service - OpenSSH per-connection server daemon (139.178.89.65:38526).
Feb 13 21:30:45.359793 sshd[7710]: Accepted publickey for core from 139.178.89.65 port 38526 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:45.363285 sshd[7710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:45.374619 systemd-logind[1806]: New session 24 of user core.
Feb 13 21:30:45.390573 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 21:30:45.485276 sshd[7710]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:45.486951 systemd[1]: sshd@21-147.28.180.221:22-139.178.89.65:38526.service: Deactivated successfully.
Feb 13 21:30:45.487855 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 21:30:45.488607 systemd-logind[1806]: Session 24 logged out. Waiting for processes to exit.
Feb 13 21:30:45.489199 systemd-logind[1806]: Removed session 24.
Feb 13 21:30:50.511425 systemd[1]: Started sshd@22-147.28.180.221:22-139.178.89.65:38528.service - OpenSSH per-connection server daemon (139.178.89.65:38528).
Feb 13 21:30:50.542916 sshd[7754]: Accepted publickey for core from 139.178.89.65 port 38528 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:50.544074 sshd[7754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:50.548468 systemd-logind[1806]: New session 25 of user core.
Feb 13 21:30:50.559367 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 21:30:50.647452 sshd[7754]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:50.649375 systemd[1]: sshd@22-147.28.180.221:22-139.178.89.65:38528.service: Deactivated successfully.
Feb 13 21:30:50.650290 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 21:30:50.650680 systemd-logind[1806]: Session 25 logged out. Waiting for processes to exit.
Feb 13 21:30:50.651274 systemd-logind[1806]: Removed session 25.
Feb 13 21:30:55.663907 systemd[1]: Started sshd@23-147.28.180.221:22-139.178.89.65:35806.service - OpenSSH per-connection server daemon (139.178.89.65:35806).
Feb 13 21:30:55.695902 sshd[7781]: Accepted publickey for core from 139.178.89.65 port 35806 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:30:55.696779 sshd[7781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:30:55.700058 systemd-logind[1806]: New session 26 of user core.
Feb 13 21:30:55.719392 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 21:30:55.806415 sshd[7781]: pam_unix(sshd:session): session closed for user core
Feb 13 21:30:55.808019 systemd[1]: sshd@23-147.28.180.221:22-139.178.89.65:35806.service: Deactivated successfully.
Feb 13 21:30:55.808972 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 21:30:55.809724 systemd-logind[1806]: Session 26 logged out. Waiting for processes to exit.
Feb 13 21:30:55.810271 systemd-logind[1806]: Removed session 26.
Feb 13 21:31:00.839833 systemd[1]: Started sshd@24-147.28.180.221:22-139.178.89.65:35820.service - OpenSSH per-connection server daemon (139.178.89.65:35820).
Feb 13 21:31:00.873858 sshd[7805]: Accepted publickey for core from 139.178.89.65 port 35820 ssh2: RSA SHA256:6ByWF9I+QbePXoVE/Ooa8KUw2dPGq3Qvw04/G+Sn80U
Feb 13 21:31:00.874682 sshd[7805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:31:00.877815 systemd-logind[1806]: New session 27 of user core.
Feb 13 21:31:00.902686 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 21:31:01.001207 sshd[7805]: pam_unix(sshd:session): session closed for user core
Feb 13 21:31:01.007863 systemd[1]: sshd@24-147.28.180.221:22-139.178.89.65:35820.service: Deactivated successfully.
Feb 13 21:31:01.012016 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 21:31:01.015408 systemd-logind[1806]: Session 27 logged out. Waiting for processes to exit.
Feb 13 21:31:01.018293 systemd-logind[1806]: Removed session 27.