Jan 13 22:24:36.999124 kernel: microcode: updated early: 0xde -> 0xfc, date = 2023-07-27
Jan 13 22:24:36.999138 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 22:24:36.999144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:24:36.999150 kernel: BIOS-provided physical RAM map:
Jan 13 22:24:36.999153 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 13 22:24:36.999157 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 13 22:24:36.999162 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 13 22:24:36.999166 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 13 22:24:36.999170 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 13 22:24:36.999174 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000620bafff] usable
Jan 13 22:24:36.999178 kernel: BIOS-e820: [mem 0x00000000620bb000-0x00000000620bbfff] ACPI NVS
Jan 13 22:24:36.999183 kernel: BIOS-e820: [mem 0x00000000620bc000-0x00000000620bcfff] reserved
Jan 13 22:24:36.999187 kernel: BIOS-e820: [mem 0x00000000620bd000-0x000000006c0c4fff] usable
Jan 13 22:24:36.999191 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Jan 13 22:24:36.999197 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Jan 13 22:24:36.999201 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Jan 13 22:24:36.999207 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Jan 13 22:24:36.999211 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Jan 13 22:24:36.999216 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Jan 13 22:24:36.999220 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 22:24:36.999225 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 13 22:24:36.999229 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 13 22:24:36.999234 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 22:24:36.999238 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 13 22:24:36.999243 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Jan 13 22:24:36.999247 kernel: NX (Execute Disable) protection: active
Jan 13 22:24:36.999252 kernel: APIC: Static calls initialized
Jan 13 22:24:36.999258 kernel: SMBIOS 3.2.1 present.
Jan 13 22:24:36.999262 kernel: DMI: Supermicro X11SCH-F/X11SCH-F, BIOS 1.5 11/17/2020
Jan 13 22:24:36.999267 kernel: tsc: Detected 3400.000 MHz processor
Jan 13 22:24:36.999271 kernel: tsc: Detected 3399.906 MHz TSC
Jan 13 22:24:36.999276 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 22:24:36.999281 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 22:24:36.999286 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Jan 13 22:24:36.999291 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 13 22:24:36.999296 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 22:24:36.999301 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Jan 13 22:24:36.999306 kernel: Using GB pages for direct mapping
Jan 13 22:24:36.999311 kernel: ACPI: Early table checksum verification disabled
Jan 13 22:24:36.999316 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 13 22:24:36.999323 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 13 22:24:36.999328 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Jan 13 22:24:36.999333 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 13 22:24:36.999339 kernel: ACPI: FACS 0x000000006D762F80 000040
Jan 13 22:24:36.999344 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Jan 13 22:24:36.999349 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Jan 13 22:24:36.999354 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 13 22:24:36.999359 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 13 22:24:36.999364 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 13 22:24:36.999369 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 13 22:24:36.999374 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 13 22:24:36.999380 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 13 22:24:36.999385 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999390 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 13 22:24:36.999395 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 13 22:24:36.999400 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999405 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999410 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 13 22:24:36.999415 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 13 22:24:36.999421 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999426 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999431 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 13 22:24:36.999436 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Jan 13 22:24:36.999441 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 13 22:24:36.999446 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 13 22:24:36.999451 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 13 22:24:36.999456 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 ?b 01072009 AMI 00010013)
Jan 13 22:24:36.999461 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 13 22:24:36.999467 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 13 22:24:36.999472 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 13 22:24:36.999477 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 13 22:24:36.999482 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 13 22:24:36.999487 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Jan 13 22:24:36.999492 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Jan 13 22:24:36.999497 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Jan 13 22:24:36.999502 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Jan 13 22:24:36.999507 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Jan 13 22:24:36.999513 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Jan 13 22:24:36.999518 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Jan 13 22:24:36.999523 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Jan 13 22:24:36.999528 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Jan 13 22:24:36.999533 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Jan 13 22:24:36.999538 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Jan 13 22:24:36.999543 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Jan 13 22:24:36.999548 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Jan 13 22:24:36.999552 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Jan 13 22:24:36.999558 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Jan 13 22:24:36.999563 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Jan 13 22:24:36.999568 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Jan 13 22:24:36.999573 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Jan 13 22:24:36.999578 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Jan 13 22:24:36.999583 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Jan 13 22:24:36.999588 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Jan 13 22:24:36.999593 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Jan 13 22:24:36.999598 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Jan 13 22:24:36.999604 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Jan 13 22:24:36.999609 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Jan 13 22:24:36.999613 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Jan 13 22:24:36.999618 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Jan 13 22:24:36.999623 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Jan 13 22:24:36.999628 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Jan 13 22:24:36.999633 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Jan 13 22:24:36.999638 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Jan 13 22:24:36.999643 kernel: No NUMA configuration found
Jan 13 22:24:36.999649 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Jan 13 22:24:36.999655 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Jan 13 22:24:36.999660 kernel: Zone ranges:
Jan 13 22:24:36.999665 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 22:24:36.999670 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 22:24:36.999675 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Jan 13 22:24:36.999680 kernel: Movable zone start for each node
Jan 13 22:24:36.999685 kernel: Early memory node ranges
Jan 13 22:24:36.999690 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 13 22:24:36.999695 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 13 22:24:36.999701 kernel: node 0: [mem 0x0000000040400000-0x00000000620bafff]
Jan 13 22:24:36.999706 kernel: node 0: [mem 0x00000000620bd000-0x000000006c0c4fff]
Jan 13 22:24:36.999710 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Jan 13 22:24:36.999716 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Jan 13 22:24:36.999725 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Jan 13 22:24:36.999730 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Jan 13 22:24:36.999735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 22:24:36.999741 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 13 22:24:36.999747 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 13 22:24:36.999752 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 13 22:24:36.999757 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Jan 13 22:24:36.999765 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Jan 13 22:24:36.999770 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Jan 13 22:24:36.999776 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 13 22:24:36.999781 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 22:24:36.999787 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 22:24:36.999792 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 22:24:36.999798 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 22:24:36.999804 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 22:24:36.999809 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 22:24:36.999814 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 22:24:36.999820 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 22:24:36.999825 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 22:24:36.999830 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 22:24:36.999835 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 22:24:36.999841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 22:24:36.999847 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 22:24:36.999852 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 22:24:36.999858 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 22:24:36.999863 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 22:24:36.999868 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 13 22:24:36.999874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 22:24:36.999879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 22:24:36.999884 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 22:24:36.999890 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 22:24:36.999896 kernel: TSC deadline timer available
Jan 13 22:24:36.999902 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 13 22:24:36.999907 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Jan 13 22:24:36.999912 kernel: Booting paravirtualized kernel on bare hardware
Jan 13 22:24:36.999918 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 22:24:36.999923 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 22:24:36.999929 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 22:24:36.999934 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 22:24:36.999939 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 22:24:36.999946 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:24:36.999952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 22:24:36.999957 kernel: random: crng init done
Jan 13 22:24:36.999962 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 13 22:24:36.999968 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 22:24:36.999973 kernel: Fallback order for Node 0: 0
Jan 13 22:24:36.999978 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Jan 13 22:24:36.999984 kernel: Policy zone: Normal
Jan 13 22:24:36.999990 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 22:24:36.999995 kernel: software IO TLB: area num 16.
Jan 13 22:24:37.000001 kernel: Memory: 32551316K/33281940K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 730364K reserved, 0K cma-reserved)
Jan 13 22:24:37.000006 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 22:24:37.000012 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 22:24:37.000017 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 22:24:37.000022 kernel: Dynamic Preempt: voluntary
Jan 13 22:24:37.000028 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 22:24:37.000033 kernel: rcu: RCU event tracing is enabled.
Jan 13 22:24:37.000040 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 22:24:37.000045 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 22:24:37.000051 kernel: Rude variant of Tasks RCU enabled.
Jan 13 22:24:37.000056 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 22:24:37.000061 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 22:24:37.000067 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 22:24:37.000072 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 13 22:24:37.000077 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 22:24:37.000083 kernel: Console: colour dummy device 80x25
Jan 13 22:24:37.000089 kernel: printk: console [tty0] enabled
Jan 13 22:24:37.000094 kernel: printk: console [ttyS1] enabled
Jan 13 22:24:37.000100 kernel: ACPI: Core revision 20230628
Jan 13 22:24:37.000105 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Jan 13 22:24:37.000111 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 22:24:37.000116 kernel: DMAR: Host address width 39
Jan 13 22:24:37.000121 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Jan 13 22:24:37.000127 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Jan 13 22:24:37.000132 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 13 22:24:37.000137 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 13 22:24:37.000144 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff
Jan 13 22:24:37.000149 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff
Jan 13 22:24:37.000154 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Jan 13 22:24:37.000159 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 13 22:24:37.000165 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 13 22:24:37.000170 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 13 22:24:37.000176 kernel: x2apic enabled
Jan 13 22:24:37.000181 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 13 22:24:37.000186 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 22:24:37.000193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 13 22:24:37.000198 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 13 22:24:37.000204 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 13 22:24:37.000209 kernel: process: using mwait in idle threads
Jan 13 22:24:37.000214 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 22:24:37.000220 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 22:24:37.000225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 22:24:37.000230 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 22:24:37.000237 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 22:24:37.000242 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 22:24:37.000247 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 22:24:37.000253 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 22:24:37.000258 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 22:24:37.000264 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 22:24:37.000269 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 22:24:37.000274 kernel: TAA: Mitigation: TSX disabled
Jan 13 22:24:37.000280 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 13 22:24:37.000286 kernel: SRBDS: Mitigation: Microcode
Jan 13 22:24:37.000291 kernel: GDS: Mitigation: Microcode
Jan 13 22:24:37.000297 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 22:24:37.000302 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 22:24:37.000307 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 22:24:37.000312 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 22:24:37.000318 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 22:24:37.000323 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 22:24:37.000328 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 22:24:37.000335 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 22:24:37.000340 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 13 22:24:37.000345 kernel: Freeing SMP alternatives memory: 32K
Jan 13 22:24:37.000351 kernel: pid_max: default: 32768 minimum: 301
Jan 13 22:24:37.000356 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 22:24:37.000361 kernel: landlock: Up and running.
Jan 13 22:24:37.000367 kernel: SELinux: Initializing.
Jan 13 22:24:37.000372 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 22:24:37.000377 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 22:24:37.000384 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 13 22:24:37.000389 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:24:37.000394 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:24:37.000400 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:24:37.000405 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 13 22:24:37.000411 kernel: ... version: 4
Jan 13 22:24:37.000416 kernel: ... bit width: 48
Jan 13 22:24:37.000421 kernel: ... generic registers: 4
Jan 13 22:24:37.000426 kernel: ... value mask: 0000ffffffffffff
Jan 13 22:24:37.000433 kernel: ... max period: 00007fffffffffff
Jan 13 22:24:37.000438 kernel: ... fixed-purpose events: 3
Jan 13 22:24:37.000443 kernel: ... event mask: 000000070000000f
Jan 13 22:24:37.000449 kernel: signal: max sigframe size: 2032
Jan 13 22:24:37.000454 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 13 22:24:37.000459 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 22:24:37.000465 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 22:24:37.000470 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 13 22:24:37.000475 kernel: smp: Bringing up secondary CPUs ...
Jan 13 22:24:37.000482 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 22:24:37.000487 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 13 22:24:37.000493 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 22:24:37.000498 kernel: smp: Brought up 1 node, 16 CPUs
Jan 13 22:24:37.000504 kernel: smpboot: Max logical packages: 1
Jan 13 22:24:37.000509 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 13 22:24:37.000514 kernel: devtmpfs: initialized
Jan 13 22:24:37.000520 kernel: x86/mm: Memory block size: 128MB
Jan 13 22:24:37.000525 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x620bb000-0x620bbfff] (4096 bytes)
Jan 13 22:24:37.000531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes)
Jan 13 22:24:37.000537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 22:24:37.000542 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 22:24:37.000547 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 22:24:37.000553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 22:24:37.000558 kernel: audit: initializing netlink subsys (disabled)
Jan 13 22:24:37.000564 kernel: audit: type=2000 audit(1736807071.112:1): state=initialized audit_enabled=0 res=1
Jan 13 22:24:37.000569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 22:24:37.000575 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 22:24:37.000580 kernel: cpuidle: using governor menu
Jan 13 22:24:37.000586 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 22:24:37.000591 kernel: dca service started, version 1.12.1
Jan 13 22:24:37.000596 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 22:24:37.000602 kernel: PCI: Using configuration type 1 for base access
Jan 13 22:24:37.000607 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 13 22:24:37.000612 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 22:24:37.000618 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 22:24:37.000624 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 22:24:37.000629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 22:24:37.000635 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 22:24:37.000640 kernel: ACPI: Added _OSI(Module Device)
Jan 13 22:24:37.000645 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 22:24:37.000651 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 22:24:37.000656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 22:24:37.000661 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 13 22:24:37.000667 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:24:37.000673 kernel: ACPI: SSDT 0xFFFF918F01CFF400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 13 22:24:37.000678 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:24:37.000684 kernel: ACPI: SSDT 0xFFFF918F01CEB800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 13 22:24:37.000689 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:24:37.000694 kernel: ACPI: SSDT 0xFFFF918F0024E200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 13 22:24:37.000699 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:24:37.000705 kernel: ACPI: SSDT 0xFFFF918F01CE9800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 13 22:24:37.000710 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:24:37.000715 kernel: ACPI: SSDT 0xFFFF918F0012F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 13 22:24:37.000720 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:24:37.000727 kernel: ACPI: SSDT 0xFFFF918F01CFC400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 13 22:24:37.000732 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 13 22:24:37.000737 kernel: ACPI: Interpreter enabled
Jan 13 22:24:37.000743 kernel: ACPI: PM: (supports S0 S5)
Jan 13 22:24:37.000748 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 22:24:37.000753 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 13 22:24:37.000759 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 13 22:24:37.000766 kernel: HEST: Table parsing has been initialized.
Jan 13 22:24:37.000771 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 13 22:24:37.000795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 22:24:37.000801 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 22:24:37.000820 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 13 22:24:37.000826 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 13 22:24:37.000831 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 13 22:24:37.000836 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 13 22:24:37.000842 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 13 22:24:37.000847 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 13 22:24:37.000853 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jan 13 22:24:37.000859 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 13 22:24:37.000864 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 13 22:24:37.000870 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 13 22:24:37.000875 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 13 22:24:37.000880 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 13 22:24:37.000886 kernel: ACPI: \PIN_: New power resource
Jan 13 22:24:37.000891 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 13 22:24:37.000961 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 22:24:37.001015 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 13 22:24:37.001062 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 13 22:24:37.001070 kernel: PCI host bridge to bus 0000:00
Jan 13 22:24:37.001118 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 22:24:37.001161 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 22:24:37.001201 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 22:24:37.001242 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window]
Jan 13 22:24:37.001285 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 13 22:24:37.001325 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 13 22:24:37.001382 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 13 22:24:37.001435 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 13 22:24:37.001483 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.001533 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Jan 13 22:24:37.001583 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.001631 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Jan 13 22:24:37.001678 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Jan 13 22:24:37.001723 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Jan 13 22:24:37.001772 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Jan 13 22:24:37.001822 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 13 22:24:37.001869 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Jan 13 22:24:37.001920 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 13 22:24:37.001968 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Jan 13 22:24:37.002021 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 13 22:24:37.002068 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Jan 13 22:24:37.002115 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 13 22:24:37.002173 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 13 22:24:37.002223 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Jan 13 22:24:37.002269 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Jan 13 22:24:37.002318 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 13 22:24:37.002365 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 22:24:37.002416 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 13 22:24:37.002462 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 22:24:37.002515 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 13 22:24:37.002561 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Jan 13 22:24:37.002607 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 13 22:24:37.002656 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 13 22:24:37.002704 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Jan 13 22:24:37.002749 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 13 22:24:37.002871 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 13 22:24:37.002921 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Jan 13 22:24:37.002967 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 13 22:24:37.003016 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 13 22:24:37.003063 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Jan 13 22:24:37.003111 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Jan 13 22:24:37.003158 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Jan 13 22:24:37.003203 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Jan 13 22:24:37.003250 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Jan 13 22:24:37.003295 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Jan 13 22:24:37.003342 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 13 22:24:37.003392 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 13 22:24:37.003442 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.003492 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 13 22:24:37.003542 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.003595 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 13 22:24:37.003641 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.003693 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 13 22:24:37.003742 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.003799 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Jan 13 22:24:37.003847 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.003898 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 13 22:24:37.003944 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 22:24:37.003994 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 13 22:24:37.004046 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 13 22:24:37.004093 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Jan 13 22:24:37.004138 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 13 22:24:37.004188 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 13 22:24:37.004234 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 13 22:24:37.004283 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 22:24:37.004339 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Jan 13 22:24:37.004387 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 13 22:24:37.004435 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Jan 13 22:24:37.004483 kernel: pci 0000:02:00.0: PME# supported from D3cold
Jan 13 22:24:37.004531 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 22:24:37.004578 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 22:24:37.004630 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Jan 13 22:24:37.004680 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 13 22:24:37.004728 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Jan 13 22:24:37.004778 kernel: pci 0000:02:00.1: PME# supported from D3cold
Jan 13 22:24:37.004825 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 22:24:37.004874 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 22:24:37.004921 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 13 22:24:37.004969 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Jan 13 22:24:37.005018 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 13 22:24:37.005066 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Jan 13 22:24:37.005120 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Jan 13 22:24:37.005169 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Jan 13 22:24:37.005216 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Jan 13 22:24:37.005263 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Jan 13 22:24:37.005311 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Jan 13 22:24:37.005358 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Jan 13 22:24:37.005408 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Jan 13 22:24:37.005454 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 13 22:24:37.005502 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Jan 13 22:24:37.005555 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Jan 13 22:24:37.005604 kernel: pci
0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 13 22:24:37.005653 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Jan 13 22:24:37.005701 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 22:24:37.005752 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Jan 13 22:24:37.005802 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.005850 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 13 22:24:37.005897 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 22:24:37.005944 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Jan 13 22:24:37.005990 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 13 22:24:37.006043 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 22:24:37.006091 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 13 22:24:37.006142 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 22:24:37.006190 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 22:24:37.006237 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 13 22:24:37.006284 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.006330 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.006381 kernel: pci_bus 0000:08: extended config space not accessible Jan 13 22:24:37.006434 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 22:24:37.006488 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Jan 13 22:24:37.006537 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Jan 13 22:24:37.006587 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 22:24:37.006637 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 22:24:37.006686 kernel: pci 0000:08:00.0: supports D1 D2 Jan 13 22:24:37.006736 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 22:24:37.006791 kernel: pci 0000:07:00.0: 
PCI bridge to [bus 08] Jan 13 22:24:37.006841 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.006890 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.006899 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 22:24:37.006905 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 22:24:37.006910 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 22:24:37.006916 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 22:24:37.006922 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 22:24:37.006928 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 13 22:24:37.006933 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 22:24:37.006940 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 22:24:37.006946 kernel: iommu: Default domain type: Translated Jan 13 22:24:37.006952 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 22:24:37.006957 kernel: PCI: Using ACPI for IRQ routing Jan 13 22:24:37.006963 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 22:24:37.006969 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 22:24:37.006974 kernel: e820: reserve RAM buffer [mem 0x620bb000-0x63ffffff] Jan 13 22:24:37.006980 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Jan 13 22:24:37.006985 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Jan 13 22:24:37.006992 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Jan 13 22:24:37.007040 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 13 22:24:37.007090 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 13 22:24:37.007140 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 22:24:37.007148 kernel: vgaarb: loaded Jan 13 22:24:37.007154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 13 
22:24:37.007159 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 13 22:24:37.007165 kernel: clocksource: Switched to clocksource tsc-early Jan 13 22:24:37.007171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 22:24:37.007178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 22:24:37.007184 kernel: pnp: PnP ACPI init Jan 13 22:24:37.007235 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 22:24:37.007282 kernel: pnp 00:02: [dma 0 disabled] Jan 13 22:24:37.007329 kernel: pnp 00:03: [dma 0 disabled] Jan 13 22:24:37.007374 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 22:24:37.007419 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 22:24:37.007465 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 13 22:24:37.007511 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 22:24:37.007554 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 22:24:37.007596 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 22:24:37.007639 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 13 22:24:37.007681 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 22:24:37.007727 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 22:24:37.007774 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 22:24:37.007818 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 22:24:37.007869 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 22:24:37.007912 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 22:24:37.007954 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 22:24:37.007997 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 22:24:37.008041 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be 
reserved Jan 13 22:24:37.008083 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 22:24:37.008125 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 22:24:37.008171 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 22:24:37.008180 kernel: pnp: PnP ACPI: found 10 devices Jan 13 22:24:37.008186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 22:24:37.008192 kernel: NET: Registered PF_INET protocol family Jan 13 22:24:37.008199 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:24:37.008205 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 22:24:37.008211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 22:24:37.008216 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:24:37.008222 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 22:24:37.008228 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 22:24:37.008234 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 22:24:37.008239 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 22:24:37.008245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 22:24:37.008252 kernel: NET: Registered PF_XDP protocol family Jan 13 22:24:37.008299 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Jan 13 22:24:37.008347 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Jan 13 22:24:37.008394 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Jan 13 22:24:37.008441 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 22:24:37.008493 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008540 kernel: pci 
0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008589 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008636 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008684 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 13 22:24:37.008730 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Jan 13 22:24:37.008782 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:24:37.008829 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 13 22:24:37.008878 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 13 22:24:37.008925 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 22:24:37.008972 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Jan 13 22:24:37.009019 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 13 22:24:37.009066 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 22:24:37.009113 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Jan 13 22:24:37.009159 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 13 22:24:37.009207 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 13 22:24:37.009257 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.009305 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.009350 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 13 22:24:37.009397 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.009444 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.009486 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 22:24:37.009529 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 22:24:37.009569 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 22:24:37.009611 kernel: pci_bus 
0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 22:24:37.009654 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Jan 13 22:24:37.009696 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 22:24:37.009742 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Jan 13 22:24:37.009790 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:24:37.009837 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 13 22:24:37.009880 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Jan 13 22:24:37.009929 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 13 22:24:37.009973 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Jan 13 22:24:37.010021 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 22:24:37.010065 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.010110 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 13 22:24:37.010155 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.010163 kernel: PCI: CLS 64 bytes, default 64 Jan 13 22:24:37.010170 kernel: DMAR: No ATSR found Jan 13 22:24:37.010176 kernel: DMAR: No SATC found Jan 13 22:24:37.010182 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 13 22:24:37.010188 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 13 22:24:37.010193 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 13 22:24:37.010199 kernel: DMAR: IOMMU feature pasid inconsistent Jan 13 22:24:37.010205 kernel: DMAR: IOMMU feature eafs inconsistent Jan 13 22:24:37.010210 kernel: DMAR: IOMMU feature prs inconsistent Jan 13 22:24:37.010216 kernel: DMAR: IOMMU feature nest inconsistent Jan 13 22:24:37.010223 kernel: DMAR: IOMMU feature mts inconsistent Jan 13 22:24:37.010229 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 13 22:24:37.010235 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 13 22:24:37.010240 
kernel: DMAR: dmar0: Using Queued invalidation Jan 13 22:24:37.010246 kernel: DMAR: dmar1: Using Queued invalidation Jan 13 22:24:37.010292 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 13 22:24:37.010340 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 13 22:24:37.010387 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 13 22:24:37.010434 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 13 22:24:37.010483 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 13 22:24:37.010530 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 13 22:24:37.010578 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 13 22:24:37.010624 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 13 22:24:37.010671 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 13 22:24:37.010716 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 13 22:24:37.010764 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 13 22:24:37.010811 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 13 22:24:37.010861 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 13 22:24:37.010907 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 13 22:24:37.010954 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 13 22:24:37.011000 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 13 22:24:37.011048 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 13 22:24:37.011093 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 13 22:24:37.011140 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 13 22:24:37.011187 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 13 22:24:37.011233 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 13 22:24:37.011283 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 13 22:24:37.011329 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 13 22:24:37.011378 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 13 22:24:37.011425 kernel: pci 0000:02:00.1: Adding to iommu group 2 Jan 13 22:24:37.011473 kernel: pci 0000:04:00.0: 
Adding to iommu group 16 Jan 13 22:24:37.011521 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 13 22:24:37.011570 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 13 22:24:37.011619 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 13 22:24:37.011629 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 22:24:37.011635 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 22:24:37.011641 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Jan 13 22:24:37.011647 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 13 22:24:37.011653 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 22:24:37.011658 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 22:24:37.011664 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 22:24:37.011670 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 13 22:24:37.011721 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 22:24:37.011730 kernel: Initialise system trusted keyrings Jan 13 22:24:37.011736 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 22:24:37.011741 kernel: Key type asymmetric registered Jan 13 22:24:37.011747 kernel: Asymmetric key parser 'x509' registered Jan 13 22:24:37.011753 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 22:24:37.011758 kernel: io scheduler mq-deadline registered Jan 13 22:24:37.011767 kernel: io scheduler kyber registered Jan 13 22:24:37.011773 kernel: io scheduler bfq registered Jan 13 22:24:37.011862 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 13 22:24:37.011908 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 13 22:24:37.011955 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 13 22:24:37.012002 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jan 13 22:24:37.012049 kernel: 
pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 13 22:24:37.012095 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 13 22:24:37.012142 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 13 22:24:37.012196 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 13 22:24:37.012204 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 13 22:24:37.012210 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 13 22:24:37.012216 kernel: pstore: Using crash dump compression: deflate Jan 13 22:24:37.012222 kernel: pstore: Registered erst as persistent store backend Jan 13 22:24:37.012227 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 22:24:37.012233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 22:24:37.012239 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 22:24:37.012246 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 22:24:37.012293 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 13 22:24:37.012301 kernel: i8042: PNP: No PS/2 controller found. 
Jan 13 22:24:37.012343 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 13 22:24:37.012387 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 13 22:24:37.012430 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T22:24:35 UTC (1736807075) Jan 13 22:24:37.012473 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 13 22:24:37.012481 kernel: intel_pstate: Intel P-state driver initializing Jan 13 22:24:37.012489 kernel: intel_pstate: Disabling energy efficiency optimization Jan 13 22:24:37.012494 kernel: intel_pstate: HWP enabled Jan 13 22:24:37.012500 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 13 22:24:37.012506 kernel: vesafb: scrolling: redraw Jan 13 22:24:37.012511 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 13 22:24:37.012517 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x0000000080541909, using 768k, total 768k Jan 13 22:24:37.012523 kernel: Console: switching to colour frame buffer device 128x48 Jan 13 22:24:37.012528 kernel: fb0: VESA VGA frame buffer device Jan 13 22:24:37.012534 kernel: NET: Registered PF_INET6 protocol family Jan 13 22:24:37.012541 kernel: Segment Routing with IPv6 Jan 13 22:24:37.012546 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 22:24:37.012552 kernel: NET: Registered PF_PACKET protocol family Jan 13 22:24:37.012558 kernel: Key type dns_resolver registered Jan 13 22:24:37.012563 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 13 22:24:37.012569 kernel: IPI shorthand broadcast: enabled Jan 13 22:24:37.012575 kernel: sched_clock: Marking stable (1720491524, 1390614232)->(4574311019, -1463205263) Jan 13 22:24:37.012580 kernel: registered taskstats version 1 Jan 13 22:24:37.012586 kernel: Loading compiled-in X.509 certificates Jan 13 22:24:37.012592 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 22:24:37.012598 kernel: Key type .fscrypt registered Jan 13 22:24:37.012604 kernel: Key type fscrypt-provisioning registered Jan 13 22:24:37.012609 kernel: ima: Allocated hash algorithm: sha1 Jan 13 22:24:37.012615 kernel: ima: No architecture policies found Jan 13 22:24:37.012621 kernel: clk: Disabling unused clocks Jan 13 22:24:37.012626 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 22:24:37.012632 kernel: Write protecting the kernel read-only data: 36864k Jan 13 22:24:37.012637 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 22:24:37.012644 kernel: Run /init as init process Jan 13 22:24:37.012650 kernel: with arguments: Jan 13 22:24:37.012655 kernel: /init Jan 13 22:24:37.012661 kernel: with environment: Jan 13 22:24:37.012666 kernel: HOME=/ Jan 13 22:24:37.012672 kernel: TERM=linux Jan 13 22:24:37.012678 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 22:24:37.012684 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 22:24:37.012692 systemd[1]: Detected architecture x86-64. Jan 13 22:24:37.012698 systemd[1]: Running in initrd. Jan 13 22:24:37.012704 systemd[1]: No hostname configured, using default hostname. Jan 13 22:24:37.012710 systemd[1]: Hostname set to . 
Jan 13 22:24:37.012716 systemd[1]: Initializing machine ID from random generator. Jan 13 22:24:37.012722 systemd[1]: Queued start job for default target initrd.target. Jan 13 22:24:37.012728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:24:37.012735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:24:37.012741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 22:24:37.012747 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 22:24:37.012753 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 22:24:37.012759 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 22:24:37.012768 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 22:24:37.012774 kernel: tsc: Refined TSC clocksource calibration: 3407.985 MHz Jan 13 22:24:37.012781 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 22:24:37.012787 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc5a980c, max_idle_ns: 440795300013 ns Jan 13 22:24:37.012793 kernel: clocksource: Switched to clocksource tsc Jan 13 22:24:37.012798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:24:37.012804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:24:37.012810 systemd[1]: Reached target paths.target - Path Units. Jan 13 22:24:37.012816 systemd[1]: Reached target slices.target - Slice Units. Jan 13 22:24:37.012822 systemd[1]: Reached target swap.target - Swaps. Jan 13 22:24:37.012828 systemd[1]: Reached target timers.target - Timer Units. 
Jan 13 22:24:37.012835 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 22:24:37.012841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 22:24:37.012847 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 22:24:37.012853 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 22:24:37.012859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:24:37.012865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 22:24:37.012871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 22:24:37.012877 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 22:24:37.012884 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 22:24:37.012889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 22:24:37.012895 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 22:24:37.012901 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 22:24:37.012907 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 22:24:37.012923 systemd-journald[267]: Collecting audit messages is disabled. Jan 13 22:24:37.012938 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 22:24:37.012944 systemd-journald[267]: Journal started Jan 13 22:24:37.012957 systemd-journald[267]: Runtime Journal (/run/log/journal/f3d707202c0249c4897a2bc921c8599f) is 8.0M, max 636.6M, 628.6M free. Jan 13 22:24:37.035466 systemd-modules-load[268]: Inserted module 'overlay' Jan 13 22:24:37.057765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:24:37.077783 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 22:24:37.087070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 13 22:24:37.087222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:24:37.087338 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 22:24:37.130767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 22:24:37.148523 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 13 22:24:37.159998 kernel: Bridge firewalling registered Jan 13 22:24:37.151140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 22:24:37.171385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 22:24:37.196079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 22:24:37.217234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:24:37.238574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:24:37.259362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:24:37.297988 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:24:37.309333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 22:24:37.309722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 22:24:37.315311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:24:37.315621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:24:37.316671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 22:24:37.326022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 22:24:37.337591 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 22:24:37.340819 systemd-resolved[297]: Positive Trust Anchors: Jan 13 22:24:37.340825 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 22:24:37.340862 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 22:24:37.343221 systemd-resolved[297]: Defaulting to hostname 'linux'. Jan 13 22:24:37.357996 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:24:37.472982 dracut-cmdline[307]: dracut-dracut-053 Jan 13 22:24:37.472982 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:24:37.377961 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:24:37.587793 kernel: SCSI subsystem initialized Jan 13 22:24:37.610793 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 22:24:37.633845 kernel: iscsi: registered transport (tcp)
Jan 13 22:24:37.664303 kernel: iscsi: registered transport (qla4xxx)
Jan 13 22:24:37.664320 kernel: QLogic iSCSI HBA Driver
Jan 13 22:24:37.697316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 22:24:37.719053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 22:24:37.776722 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 22:24:37.776742 kernel: device-mapper: uevent: version 1.0.3
Jan 13 22:24:37.796406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 22:24:37.853844 kernel: raid6: avx2x4 gen() 53437 MB/s
Jan 13 22:24:37.885841 kernel: raid6: avx2x2 gen() 53898 MB/s
Jan 13 22:24:37.922191 kernel: raid6: avx2x1 gen() 45243 MB/s
Jan 13 22:24:37.922209 kernel: raid6: using algorithm avx2x2 gen() 53898 MB/s
Jan 13 22:24:37.969293 kernel: raid6: .... xor() 31409 MB/s, rmw enabled
Jan 13 22:24:37.969311 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 22:24:38.009793 kernel: xor: automatically using best checksumming function   avx
Jan 13 22:24:38.122806 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 22:24:38.128160 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 22:24:38.157093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:24:38.163706 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Jan 13 22:24:38.168005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:24:38.197025 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 22:24:38.247331 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Jan 13 22:24:38.263689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 22:24:38.285005 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 22:24:38.345112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:24:38.402873 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 13 22:24:38.402890 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 13 22:24:38.402898 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 22:24:38.427778 kernel: ACPI: bus type USB registered
Jan 13 22:24:38.427798 kernel: usbcore: registered new interface driver usbfs
Jan 13 22:24:38.442825 kernel: usbcore: registered new interface driver hub
Jan 13 22:24:38.457481 kernel: usbcore: registered new device driver usb
Jan 13 22:24:38.472660 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 22:24:38.502122 kernel: PTP clock support registered
Jan 13 22:24:38.502147 kernel: libata version 3.00 loaded.
Jan 13 22:24:38.502162 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 22:24:38.502176 kernel: AES CTR mode by8 optimization enabled
Jan 13 22:24:38.496992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 22:24:39.331832 kernel: ahci 0000:00:17.0: version 3.0
Jan 13 22:24:39.331932 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 13 22:24:39.332002 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Jan 13 22:24:39.332064 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jan 13 22:24:39.332125 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jan 13 22:24:39.332186 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jan 13 22:24:39.332246 kernel: scsi host0: ahci
Jan 13 22:24:39.332316 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 13 22:24:39.332377 kernel: scsi host1: ahci
Jan 13 22:24:39.332438 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jan 13 22:24:39.332498 kernel: scsi host2: ahci
Jan 13 22:24:39.332558 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jan 13 22:24:39.332620 kernel: scsi host3: ahci
Jan 13 22:24:39.332685 kernel: hub 1-0:1.0: USB hub found
Jan 13 22:24:39.332755 kernel: scsi host4: ahci
Jan 13 22:24:39.332822 kernel: hub 1-0:1.0: 16 ports detected
Jan 13 22:24:39.332882 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jan 13 22:24:39.332891 kernel: scsi host5: ahci
Jan 13 22:24:39.332948 kernel: scsi host6: ahci
Jan 13 22:24:39.333005 kernel: scsi host7: ahci
Jan 13 22:24:39.333064 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129
Jan 13 22:24:39.333074 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129
Jan 13 22:24:39.333081 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129
Jan 13 22:24:39.333088 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129
Jan 13 22:24:39.333096 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129
Jan 13 22:24:39.333103 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129
Jan 13 22:24:39.333110 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129
Jan 13 22:24:39.333118 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129
Jan 13 22:24:39.333126 kernel: hub 2-0:1.0: USB hub found
Jan 13 22:24:39.333188 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jan 13 22:24:39.333197 kernel: pps pps0: new PPS source ptp0
Jan 13 22:24:39.333261 kernel: hub 2-0:1.0: 10 ports detected
Jan 13 22:24:39.333320 kernel: igb 0000:04:00.0: added PHC on eth0
Jan 13 22:24:39.333388 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jan 13 22:24:39.522289 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 13 22:24:39.522376 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 13 22:24:39.522386 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1c:a6
Jan 13 22:24:39.522454 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000
Jan 13 22:24:39.522518 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jan 13 22:24:39.522527 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 13 22:24:39.522590 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 13 22:24:39.522599 kernel: pps pps1: new PPS source ptp1
Jan 13 22:24:39.522662 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 13 22:24:39.522673 kernel: igb 0000:05:00.0: added PHC on eth1
Jan 13 22:24:39.522738 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 22:24:39.522747 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 13 22:24:39.522820 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jan 13 22:24:39.522829 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1c:a7
Jan 13 22:24:39.522890 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 22:24:39.522899 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000
Jan 13 22:24:39.522960 kernel: ata8: SATA link down (SStatus 0 SControl 300)
Jan 13 22:24:39.522970 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 13 22:24:39.523031 kernel: hub 1-14:1.0: USB hub found
Jan 13 22:24:39.523107 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 22:24:39.523115 kernel: hub 1-14:1.0: 4 ports detected
Jan 13 22:24:39.523185 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jan 13 22:24:39.523193 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 13 22:24:39.523201 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006
Jan 13 22:24:39.834807 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 13 22:24:39.834823 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 13 22:24:39.834900 kernel: ata1.00: Features: NCQ-prio
Jan 13 22:24:39.834909 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jan 13 22:24:39.835018 kernel: ata2.00: Features: NCQ-prio
Jan 13 22:24:39.835027 kernel: ata1.00: configured for UDMA/133
Jan 13 22:24:39.835035 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jan 13 22:24:39.835113 kernel: ata2.00: configured for UDMA/133
Jan 13 22:24:39.835121 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jan 13 22:24:40.002533 kernel: igb 0000:04:00.0 eno1: renamed from eth0
Jan 13 22:24:40.002669 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 22:24:40.002685 kernel: igb 0000:05:00.0 eno2: renamed from eth1
Jan 13 22:24:40.002803 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 13 22:24:40.002909 kernel: ata1.00: Enabling discard_zeroes_data
Jan 13 22:24:40.002924 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 13 22:24:40.003054 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Jan 13 22:24:40.003153 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Jan 13 22:24:40.003250 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jan 13 22:24:40.003347 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 22:24:40.003443 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Jan 13 22:24:40.003539 kernel: ata1.00: Enabling discard_zeroes_data
Jan 13 22:24:40.003554 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Jan 13 22:24:40.003649 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 22:24:40.003665 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 13 22:24:40.003774 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 13 22:24:40.003881 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged
Jan 13 22:24:40.003982 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jan 13 22:24:40.004080 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jan 13 22:24:40.004172 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 22:24:40.004263 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jan 13 22:24:40.004353 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 22:24:40.004367 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 13 22:24:40.004465 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 22:24:40.004479 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006
Jan 13 22:24:40.394156 kernel: GPT:9289727 != 937703087
Jan 13 22:24:40.394168 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 22:24:40.394176 kernel: GPT:9289727 != 937703087
Jan 13 22:24:40.394183 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 22:24:40.394191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 22:24:40.394198 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 13 22:24:40.394275 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jan 13 22:24:40.394344 kernel: usbcore: registered new interface driver usbhid
Jan 13 22:24:40.394353 kernel: usbhid: USB HID core driver
Jan 13 22:24:40.394360 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (547)
Jan 13 22:24:40.394368 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (568)
Jan 13 22:24:40.394376 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jan 13 22:24:40.394383 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 13 22:24:40.394446 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged
Jan 13 22:24:40.394509 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jan 13 22:24:40.394580 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jan 13 22:24:40.394589 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jan 13 22:24:40.394654 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 22:24:40.394662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 22:24:40.394670 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 13 22:24:40.394733 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 22:24:38.566591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 22:24:40.443316 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 22:24:39.368195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:24:40.476824 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1
Jan 13 22:24:39.436663 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 22:24:39.466901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 22:24:39.467005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:24:39.495919 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 22:24:40.477246 disk-uuid[718]: Primary Header is updated.
Jan 13 22:24:40.477246 disk-uuid[718]: Secondary Entries is updated.
Jan 13 22:24:40.477246 disk-uuid[718]: Secondary Header is updated.
Jan 13 22:24:39.533201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 22:24:39.801830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:24:39.802149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:39.819863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:24:40.157963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:24:40.179225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:24:40.197533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Jan 13 22:24:40.248578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Jan 13 22:24:40.288389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 13 22:24:40.309925 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 13 22:24:40.320956 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 13 22:24:40.339956 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 22:24:40.355847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:24:40.355876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:40.377812 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:24:40.424149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:24:40.497767 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
Jan 13 22:24:40.681480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:40.714075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 22:24:40.756571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:24:41.402162 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 22:24:41.422590 disk-uuid[719]: The operation has completed successfully.
Jan 13 22:24:41.430980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 22:24:41.459352 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 22:24:41.459402 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 22:24:41.497008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 22:24:41.536866 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 22:24:41.536934 sh[752]: Success
Jan 13 22:24:41.572546 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 22:24:41.599251 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 22:24:41.608084 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 22:24:41.676401 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 22:24:41.676422 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:41.698812 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 22:24:41.718879 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 22:24:41.737862 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 22:24:41.777807 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 22:24:41.780731 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 22:24:41.789297 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 22:24:41.794921 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 22:24:41.909278 kernel: BTRFS info (device sda6): first mount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:41.909364 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:41.909373 kernel: BTRFS info (device sda6): using free space tree
Jan 13 22:24:41.909380 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 22:24:41.909388 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 22:24:41.830267 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 22:24:41.945872 kernel: BTRFS info (device sda6): last unmount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:41.938981 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 22:24:41.968967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 22:24:41.984352 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:24:42.015946 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 22:24:42.033548 ignition[862]: Ignition 2.19.0
Jan 13 22:24:42.026975 systemd-networkd[936]: lo: Link UP
Jan 13 22:24:42.033552 ignition[862]: Stage: fetch-offline
Jan 13 22:24:42.026977 systemd-networkd[936]: lo: Gained carrier
Jan 13 22:24:42.033570 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:42.029431 systemd-networkd[936]: Enumeration completed
Jan 13 22:24:42.033575 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:42.029476 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 22:24:42.033626 ignition[862]: parsed url from cmdline: ""
Jan 13 22:24:42.030085 systemd-networkd[936]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.033628 ignition[862]: no config URL provided
Jan 13 22:24:42.035602 unknown[862]: fetched base config from "system"
Jan 13 22:24:42.033630 ignition[862]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 22:24:42.035606 unknown[862]: fetched user config from "system"
Jan 13 22:24:42.033652 ignition[862]: parsing config with SHA512: eb8e87c588681bb665361d414a9d4588495cca98e352b13128de1729d105d8a4bfe5c2a9188462373542cc5376e15bf67561f7568a4c767b5337695889f8d931
Jan 13 22:24:42.046155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 22:24:42.036573 ignition[862]: fetch-offline: fetch-offline passed
Jan 13 22:24:42.059625 systemd-networkd[936]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.036578 ignition[862]: POST message to Packet Timeline
Jan 13 22:24:42.063139 systemd[1]: Reached target network.target - Network.
Jan 13 22:24:42.036582 ignition[862]: POST Status error: resource requires networking
Jan 13 22:24:42.077918 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 22:24:42.036668 ignition[862]: Ignition finished successfully
Jan 13 22:24:42.273855 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Jan 13 22:24:42.084980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 22:24:42.105493 ignition[952]: Ignition 2.19.0
Jan 13 22:24:42.088039 systemd-networkd[936]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.105502 ignition[952]: Stage: kargs
Jan 13 22:24:42.268018 systemd-networkd[936]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.105715 ignition[952]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:42.105729 ignition[952]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:42.106885 ignition[952]: kargs: kargs passed
Jan 13 22:24:42.106891 ignition[952]: POST message to Packet Timeline
Jan 13 22:24:42.106907 ignition[952]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:42.107731 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37743->[::1]:53: read: connection refused
Jan 13 22:24:42.308071 ignition[952]: GET https://metadata.packet.net/metadata: attempt #2
Jan 13 22:24:42.308715 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59621->[::1]:53: read: connection refused
Jan 13 22:24:42.491802 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Jan 13 22:24:42.493025 systemd-networkd[936]: eno1: Link UP
Jan 13 22:24:42.493234 systemd-networkd[936]: eno2: Link UP
Jan 13 22:24:42.493353 systemd-networkd[936]: enp2s0f0np0: Link UP
Jan 13 22:24:42.493487 systemd-networkd[936]: enp2s0f0np0: Gained carrier
Jan 13 22:24:42.502932 systemd-networkd[936]: enp2s0f1np1: Link UP
Jan 13 22:24:42.520888 systemd-networkd[936]: enp2s0f0np0: DHCPv4 address 147.75.202.79/31, gateway 147.75.202.78 acquired from 145.40.83.140
Jan 13 22:24:42.708842 ignition[952]: GET https://metadata.packet.net/metadata: attempt #3
Jan 13 22:24:42.709992 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55045->[::1]:53: read: connection refused
Jan 13 22:24:43.293425 systemd-networkd[936]: enp2s0f1np1: Gained carrier
Jan 13 22:24:43.510419 ignition[952]: GET https://metadata.packet.net/metadata: attempt #4
Jan 13 22:24:43.511732 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48948->[::1]:53: read: connection refused
Jan 13 22:24:43.805265 systemd-networkd[936]: enp2s0f0np0: Gained IPv6LL
Jan 13 22:24:44.637271 systemd-networkd[936]: enp2s0f1np1: Gained IPv6LL
Jan 13 22:24:45.112913 ignition[952]: GET https://metadata.packet.net/metadata: attempt #5
Jan 13 22:24:45.114174 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37211->[::1]:53: read: connection refused
Jan 13 22:24:48.317619 ignition[952]: GET https://metadata.packet.net/metadata: attempt #6
Jan 13 22:24:48.986114 ignition[952]: GET result: OK
Jan 13 22:24:49.403071 ignition[952]: Ignition finished successfully
Jan 13 22:24:49.407941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 22:24:49.433058 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 22:24:49.440443 ignition[971]: Ignition 2.19.0
Jan 13 22:24:49.440448 ignition[971]: Stage: disks
Jan 13 22:24:49.440547 ignition[971]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:49.440553 ignition[971]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:49.441104 ignition[971]: disks: disks passed
Jan 13 22:24:49.441106 ignition[971]: POST message to Packet Timeline
Jan 13 22:24:49.441114 ignition[971]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:50.167886 ignition[971]: GET result: OK
Jan 13 22:24:50.556734 ignition[971]: Ignition finished successfully
Jan 13 22:24:50.560219 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 22:24:50.574910 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 22:24:50.593027 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 22:24:50.614037 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:24:50.635050 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 22:24:50.655053 systemd[1]: Reached target basic.target - Basic System.
Jan 13 22:24:50.684018 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 22:24:50.716579 systemd-fsck[990]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 22:24:50.726238 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 22:24:50.754937 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 22:24:50.851784 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 22:24:50.852172 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 22:24:50.860156 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:24:50.896992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:24:50.905722 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 22:24:51.032335 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (1000)
Jan 13 22:24:51.032424 kernel: BTRFS info (device sda6): first mount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:51.032432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:51.032439 kernel: BTRFS info (device sda6): using free space tree
Jan 13 22:24:51.032446 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 22:24:51.032453 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 22:24:50.947415 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 22:24:51.032699 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 13 22:24:51.064860 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 22:24:51.064882 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:24:51.120881 coreos-metadata[1018]: Jan 13 22:24:51.116 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 22:24:51.142951 coreos-metadata[1002]: Jan 13 22:24:51.116 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 22:24:51.084690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:24:51.110926 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 22:24:51.141004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 22:24:51.190893 initrd-setup-root[1032]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 22:24:51.200869 initrd-setup-root[1039]: cut: /sysroot/etc/group: No such file or directory
Jan 13 22:24:51.210881 initrd-setup-root[1046]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 22:24:51.220860 initrd-setup-root[1053]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 22:24:51.227023 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 22:24:51.244874 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 22:24:51.274894 coreos-metadata[1018]: Jan 13 22:24:51.233 INFO Fetch successful
Jan 13 22:24:51.294000 kernel: BTRFS info (device sda6): last unmount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:51.249449 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 22:24:51.284442 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 22:24:51.284705 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 13 22:24:51.284748 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 13 22:24:51.328513 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 22:24:51.361926 ignition[1123]: INFO : Ignition 2.19.0
Jan 13 22:24:51.361926 ignition[1123]: INFO : Stage: mount
Jan 13 22:24:51.361926 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:51.361926 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:51.361926 ignition[1123]: INFO : mount: mount passed
Jan 13 22:24:51.361926 ignition[1123]: INFO : POST message to Packet Timeline
Jan 13 22:24:51.361926 ignition[1123]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:51.769840 coreos-metadata[1002]: Jan 13 22:24:51.769 INFO Fetch successful
Jan 13 22:24:51.844663 coreos-metadata[1002]: Jan 13 22:24:51.844 INFO wrote hostname ci-4081.3.0-a-8862dc3d2a to /sysroot/etc/hostname
Jan 13 22:24:51.846180 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 22:24:52.026054 ignition[1123]: INFO : GET result: OK
Jan 13 22:24:52.357150 ignition[1123]: INFO : Ignition finished successfully
Jan 13 22:24:52.359854 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 22:24:52.393945 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 22:24:52.404991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:24:52.464713 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1147)
Jan 13 22:24:52.464731 kernel: BTRFS info (device sda6): first mount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:52.485892 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:52.504711 kernel: BTRFS info (device sda6): using free space tree
Jan 13 22:24:52.544131 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 22:24:52.544148 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 22:24:52.557932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:24:52.587012 ignition[1164]: INFO : Ignition 2.19.0
Jan 13 22:24:52.587012 ignition[1164]: INFO : Stage: files
Jan 13 22:24:52.602002 ignition[1164]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:52.602002 ignition[1164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:52.602002 ignition[1164]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 22:24:52.602002 ignition[1164]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 22:24:52.591529 unknown[1164]: wrote ssh authorized keys file for user: core
Jan 13 22:24:52.735847 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 22:24:52.798856 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 22:24:52.798856 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 22:24:53.310276 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 22:24:53.530805 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:53.530805 ignition[1164]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: files passed
Jan 13 22:24:53.561092 ignition[1164]: INFO : POST message to Packet Timeline
Jan 13 22:24:53.561092 ignition[1164]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:54.184120 ignition[1164]: INFO : GET result: OK
Jan 13 22:24:54.600534 ignition[1164]: INFO : Ignition finished successfully
Jan 13 22:24:54.604233 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 22:24:54.637017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 22:24:54.647365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 22:24:54.668101 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 22:24:54.668176 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 22:24:54.725976 initrd-setup-root-after-ignition[1204]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:24:54.725976 initrd-setup-root-after-ignition[1204]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:24:54.765039 initrd-setup-root-after-ignition[1208]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:24:54.730144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:24:54.751811 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 22:24:54.791074 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 22:24:54.839308 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 22:24:54.839360 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 22:24:54.858139 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 22:24:54.878964 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 22:24:54.899158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 22:24:54.914887 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 22:24:54.964481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:24:54.992196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 22:24:55.010256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:24:55.034054 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:24:55.046069 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 22:24:55.064085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 22:24:55.064231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:24:55.104210 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 22:24:55.114346 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 22:24:55.133351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 22:24:55.152381 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:24:55.173387 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 22:24:55.194392 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 22:24:55.214379 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 22:24:55.235365 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 22:24:55.257417 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 22:24:55.277363 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 22:24:55.297429 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 22:24:55.297862 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:24:55.332207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:24:55.342352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:24:55.363232 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 22:24:55.363680 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:24:55.385425 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 22:24:55.385847 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 22:24:55.417336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 22:24:55.417744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 22:24:55.438691 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 22:24:55.457232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 22:24:55.457629 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:24:55.478376 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 22:24:55.497349 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 22:24:55.516361 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 22:24:55.516664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 22:24:55.536389 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 22:24:55.536664 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 22:24:55.559436 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 22:24:55.676965 ignition[1228]: INFO : Ignition 2.19.0
Jan 13 22:24:55.676965 ignition[1228]: INFO : Stage: umount
Jan 13 22:24:55.676965 ignition[1228]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:55.676965 ignition[1228]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:55.676965 ignition[1228]: INFO : umount: umount passed
Jan 13 22:24:55.676965 ignition[1228]: INFO : POST message to Packet Timeline
Jan 13 22:24:55.676965 ignition[1228]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:55.559824 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:24:55.578429 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 22:24:55.578787 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 22:24:55.596414 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 22:24:55.596782 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 22:24:55.629056 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 22:24:55.643888 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 22:24:55.644014 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:24:55.674065 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 22:24:55.677016 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 22:24:55.677180 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:24:55.685239 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 22:24:55.685440 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 22:24:55.735204 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 22:24:55.739278 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 22:24:55.739492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 22:24:55.858548 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 22:24:55.858829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 22:24:56.340475 ignition[1228]: INFO : GET result: OK
Jan 13 22:24:56.670449 ignition[1228]: INFO : Ignition finished successfully
Jan 13 22:24:56.673207 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 22:24:56.673493 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 22:24:56.691058 systemd[1]: Stopped target network.target - Network.
Jan 13 22:24:56.707042 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 22:24:56.707229 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 22:24:56.725235 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 22:24:56.725392 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 22:24:56.744201 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 22:24:56.744350 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 22:24:56.763185 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 22:24:56.763350 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 22:24:56.782206 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 22:24:56.782376 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 22:24:56.800529 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 22:24:56.810897 systemd-networkd[936]: enp2s0f1np1: DHCPv6 lease lost
Jan 13 22:24:56.818283 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 22:24:56.823003 systemd-networkd[936]: enp2s0f0np0: DHCPv6 lease lost
Jan 13 22:24:56.836821 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 22:24:56.837085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 22:24:56.855929 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 22:24:56.856255 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 22:24:56.876242 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 22:24:56.876347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:24:56.919899 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 22:24:56.935901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 22:24:56.935944 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:24:56.955047 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 22:24:56.955131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:24:56.977147 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 22:24:56.977296 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:24:56.996132 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 22:24:56.996283 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:24:57.017329 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:24:57.038820 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 22:24:57.039158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:24:57.064385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 22:24:57.064433 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:24:57.068025 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 22:24:57.068056 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:24:57.094935 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 22:24:57.094970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 22:24:57.125064 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 22:24:57.125118 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 22:24:57.164867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 22:24:57.164945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:24:57.209881 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 22:24:57.245841 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 22:24:57.245895 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:24:57.477993 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Jan 13 22:24:57.264962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:24:57.265052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:57.288084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 22:24:57.288298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 22:24:57.328071 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 22:24:57.328329 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 22:24:57.348852 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 22:24:57.389268 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 22:24:57.412948 systemd[1]: Switching root.
Jan 13 22:24:57.560937 systemd-journald[267]: Journal stopped
Jan 13 22:24:36.999124 kernel: microcode: updated early: 0xde -> 0xfc, date = 2023-07-27
Jan 13 22:24:36.999138 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 22:24:36.999144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:24:36.999150 kernel: BIOS-provided physical RAM map:
Jan 13 22:24:36.999153 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 13 22:24:36.999157 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 13 22:24:36.999162 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 13 22:24:36.999166 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 13 22:24:36.999170 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 13 22:24:36.999174 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000620bafff] usable
Jan 13 22:24:36.999178 kernel: BIOS-e820: [mem 0x00000000620bb000-0x00000000620bbfff] ACPI NVS
Jan 13 22:24:36.999183 kernel: BIOS-e820: [mem 0x00000000620bc000-0x00000000620bcfff] reserved
Jan 13 22:24:36.999187 kernel: BIOS-e820: [mem 0x00000000620bd000-0x000000006c0c4fff] usable
Jan 13 22:24:36.999191 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Jan 13 22:24:36.999197 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Jan 13 22:24:36.999201 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Jan 13 22:24:36.999207 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Jan 13 22:24:36.999211 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Jan 13 22:24:36.999216 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Jan 13 22:24:36.999220 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 22:24:36.999225 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 13 22:24:36.999229 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 13 22:24:36.999234 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 22:24:36.999238 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 13 22:24:36.999243 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Jan 13 22:24:36.999247 kernel: NX (Execute Disable) protection: active
Jan 13 22:24:36.999252 kernel: APIC: Static calls initialized
Jan 13 22:24:36.999258 kernel: SMBIOS 3.2.1 present.
Jan 13 22:24:36.999262 kernel: DMI: Supermicro X11SCH-F/X11SCH-F, BIOS 1.5 11/17/2020
Jan 13 22:24:36.999267 kernel: tsc: Detected 3400.000 MHz processor
Jan 13 22:24:36.999271 kernel: tsc: Detected 3399.906 MHz TSC
Jan 13 22:24:36.999276 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 22:24:36.999281 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 22:24:36.999286 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Jan 13 22:24:36.999291 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 13 22:24:36.999296 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 22:24:36.999301 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Jan 13 22:24:36.999306 kernel: Using GB pages for direct mapping
Jan 13 22:24:36.999311 kernel: ACPI: Early table checksum verification disabled
Jan 13 22:24:36.999316 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 13 22:24:36.999323 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 13 22:24:36.999328 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Jan 13 22:24:36.999333 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 13 22:24:36.999339 kernel: ACPI: FACS 0x000000006D762F80 000040
Jan 13 22:24:36.999344 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Jan 13 22:24:36.999349 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Jan 13 22:24:36.999354 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 13 22:24:36.999359 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 13 22:24:36.999364 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 13 22:24:36.999369 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 13 22:24:36.999374 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 13 22:24:36.999380 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 13 22:24:36.999385 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999390 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 13 22:24:36.999395 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 13 22:24:36.999400 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999405 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999410 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 13 22:24:36.999415 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 13 22:24:36.999421 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999426 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:24:36.999431 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 13 22:24:36.999436 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Jan 13 22:24:36.999441 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 13 22:24:36.999446 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 13 22:24:36.999451 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 13 22:24:36.999456 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 ?b 01072009 AMI 00010013)
Jan 13 22:24:36.999461 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 13 22:24:36.999467 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 13 22:24:36.999472 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 13 22:24:36.999477 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 13 22:24:36.999482 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 13 22:24:36.999487 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Jan 13 22:24:36.999492 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Jan 13 22:24:36.999497 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Jan 13 22:24:36.999502 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Jan 13 22:24:36.999507 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Jan 13 22:24:36.999513 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Jan 13 22:24:36.999518 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Jan 13 22:24:36.999523 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Jan 13 22:24:36.999528 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Jan 13 22:24:36.999533 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Jan 13 22:24:36.999538 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Jan 13 22:24:36.999543 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Jan 13 22:24:36.999548 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Jan 13 22:24:36.999552 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Jan 13 22:24:36.999558 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Jan 13 22:24:36.999563 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Jan 13 22:24:36.999568 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Jan 13 22:24:36.999573 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Jan 13 22:24:36.999578 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Jan 13 22:24:36.999583 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Jan 13 22:24:36.999588 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Jan 13 22:24:36.999593 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Jan 13 22:24:36.999598 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Jan 13 22:24:36.999604 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Jan 13 22:24:36.999609 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Jan 13 22:24:36.999613 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Jan 13 22:24:36.999618 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Jan 13 22:24:36.999623 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Jan 13 22:24:36.999628 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Jan 13 22:24:36.999633 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Jan 13 22:24:36.999638 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Jan 13 22:24:36.999643 kernel: No NUMA configuration found
Jan 13 22:24:36.999649 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Jan 13 22:24:36.999655 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Jan 13 22:24:36.999660 kernel: Zone ranges:
Jan 13 22:24:36.999665 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 22:24:36.999670 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 22:24:36.999675 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Jan 13 22:24:36.999680 kernel: Movable zone start for each node
Jan 13 22:24:36.999685 kernel: Early memory node ranges
Jan 13 22:24:36.999690 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 13 22:24:36.999695 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 13 22:24:36.999701 kernel: node 0: [mem 0x0000000040400000-0x00000000620bafff]
Jan 13 22:24:36.999706 kernel: node 0: [mem 0x00000000620bd000-0x000000006c0c4fff]
Jan 13 22:24:36.999710 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Jan 13 22:24:36.999716 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Jan 13 22:24:36.999725 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Jan 13 22:24:36.999730 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Jan 13 22:24:36.999735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 22:24:36.999741 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 13 22:24:36.999747 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 13 22:24:36.999752 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 13 22:24:36.999757 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Jan 13 22:24:36.999765 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Jan 13 22:24:36.999770 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Jan 13 22:24:36.999776 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 13 22:24:36.999781 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 22:24:36.999787 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 22:24:36.999792 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 22:24:36.999798 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 22:24:36.999804 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 22:24:36.999809 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 22:24:36.999814 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 22:24:36.999820 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 22:24:36.999825 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 22:24:36.999830 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 22:24:36.999835 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 22:24:36.999841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 22:24:36.999847 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 22:24:36.999852 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 22:24:36.999858 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 22:24:36.999863 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 22:24:36.999868 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 13 22:24:36.999874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 22:24:36.999879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 22:24:36.999884 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 22:24:36.999890 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 22:24:36.999896 kernel: TSC deadline timer available
Jan 13 22:24:36.999902 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 13 22:24:36.999907 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Jan 13 22:24:36.999912 kernel: Booting paravirtualized kernel on bare hardware
Jan 13 22:24:36.999918 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 22:24:36.999923 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 22:24:36.999929 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 22:24:36.999934 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 22:24:36.999939 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 22:24:36.999946 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:24:36.999952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 22:24:36.999957 kernel: random: crng init done
Jan 13 22:24:36.999962 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 13 22:24:36.999968 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 22:24:36.999973 kernel: Fallback order for Node 0: 0
Jan 13 22:24:36.999978 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Jan 13 22:24:36.999984 kernel: Policy zone: Normal
Jan 13 22:24:36.999990 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 22:24:36.999995 kernel: software IO TLB: area num 16.
Jan 13 22:24:37.000001 kernel: Memory: 32551316K/33281940K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 730364K reserved, 0K cma-reserved)
Jan 13 22:24:37.000006 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 22:24:37.000012 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 22:24:37.000017 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 22:24:37.000022 kernel: Dynamic Preempt: voluntary
Jan 13 22:24:37.000028 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 22:24:37.000033 kernel: rcu: RCU event tracing is enabled.
Jan 13 22:24:37.000040 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 13 22:24:37.000045 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 22:24:37.000051 kernel: Rude variant of Tasks RCU enabled. Jan 13 22:24:37.000056 kernel: Tracing variant of Tasks RCU enabled. Jan 13 22:24:37.000061 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 22:24:37.000067 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 13 22:24:37.000072 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 13 22:24:37.000077 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 22:24:37.000083 kernel: Console: colour dummy device 80x25 Jan 13 22:24:37.000089 kernel: printk: console [tty0] enabled Jan 13 22:24:37.000094 kernel: printk: console [ttyS1] enabled Jan 13 22:24:37.000100 kernel: ACPI: Core revision 20230628 Jan 13 22:24:37.000105 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jan 13 22:24:37.000111 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 22:24:37.000116 kernel: DMAR: Host address width 39 Jan 13 22:24:37.000121 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Jan 13 22:24:37.000127 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Jan 13 22:24:37.000132 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 13 22:24:37.000137 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 13 22:24:37.000144 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Jan 13 22:24:37.000149 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Jan 13 22:24:37.000154 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Jan 13 22:24:37.000159 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 13 22:24:37.000165 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
Jan 13 22:24:37.000170 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 13 22:24:37.000176 kernel: x2apic enabled Jan 13 22:24:37.000181 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 13 22:24:37.000186 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 22:24:37.000193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 13 22:24:37.000198 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 13 22:24:37.000204 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 13 22:24:37.000209 kernel: process: using mwait in idle threads Jan 13 22:24:37.000214 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 22:24:37.000220 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 22:24:37.000225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 22:24:37.000230 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 22:24:37.000237 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 22:24:37.000242 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 13 22:24:37.000247 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 22:24:37.000253 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 13 22:24:37.000258 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 13 22:24:37.000264 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 22:24:37.000269 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 22:24:37.000274 kernel: TAA: Mitigation: TSX disabled Jan 13 22:24:37.000280 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 13 22:24:37.000286 kernel: SRBDS: Mitigation: Microcode Jan 13 22:24:37.000291 kernel: GDS: Mitigation: 
Microcode Jan 13 22:24:37.000297 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 22:24:37.000302 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 22:24:37.000307 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 22:24:37.000312 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 13 22:24:37.000318 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 13 22:24:37.000323 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 22:24:37.000328 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 13 22:24:37.000335 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 13 22:24:37.000340 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 13 22:24:37.000345 kernel: Freeing SMP alternatives memory: 32K Jan 13 22:24:37.000351 kernel: pid_max: default: 32768 minimum: 301 Jan 13 22:24:37.000356 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 22:24:37.000361 kernel: landlock: Up and running. Jan 13 22:24:37.000367 kernel: SELinux: Initializing. Jan 13 22:24:37.000372 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 22:24:37.000377 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 22:24:37.000384 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 22:24:37.000389 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 22:24:37.000394 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 22:24:37.000400 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 13 22:24:37.000405 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 13 22:24:37.000411 kernel: ... version: 4 Jan 13 22:24:37.000416 kernel: ... bit width: 48 Jan 13 22:24:37.000421 kernel: ... generic registers: 4 Jan 13 22:24:37.000426 kernel: ... value mask: 0000ffffffffffff Jan 13 22:24:37.000433 kernel: ... max period: 00007fffffffffff Jan 13 22:24:37.000438 kernel: ... fixed-purpose events: 3 Jan 13 22:24:37.000443 kernel: ... event mask: 000000070000000f Jan 13 22:24:37.000449 kernel: signal: max sigframe size: 2032 Jan 13 22:24:37.000454 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 13 22:24:37.000459 kernel: rcu: Hierarchical SRCU implementation. Jan 13 22:24:37.000465 kernel: rcu: Max phase no-delay instances is 400. Jan 13 22:24:37.000470 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 13 22:24:37.000475 kernel: smp: Bringing up secondary CPUs ... Jan 13 22:24:37.000482 kernel: smpboot: x86: Booting SMP configuration: Jan 13 22:24:37.000487 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 13 22:24:37.000493 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 13 22:24:37.000498 kernel: smp: Brought up 1 node, 16 CPUs Jan 13 22:24:37.000504 kernel: smpboot: Max logical packages: 1 Jan 13 22:24:37.000509 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 13 22:24:37.000514 kernel: devtmpfs: initialized Jan 13 22:24:37.000520 kernel: x86/mm: Memory block size: 128MB Jan 13 22:24:37.000525 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x620bb000-0x620bbfff] (4096 bytes) Jan 13 22:24:37.000531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Jan 13 22:24:37.000537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 22:24:37.000542 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 13 22:24:37.000547 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 22:24:37.000553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 22:24:37.000558 kernel: audit: initializing netlink subsys (disabled) Jan 13 22:24:37.000564 kernel: audit: type=2000 audit(1736807071.112:1): state=initialized audit_enabled=0 res=1 Jan 13 22:24:37.000569 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 22:24:37.000575 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 22:24:37.000580 kernel: cpuidle: using governor menu Jan 13 22:24:37.000586 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 22:24:37.000591 kernel: dca service started, version 1.12.1 Jan 13 22:24:37.000596 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 13 22:24:37.000602 kernel: PCI: Using configuration type 1 for base access Jan 13 22:24:37.000607 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 13 22:24:37.000612 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 22:24:37.000618 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 22:24:37.000624 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 22:24:37.000629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 22:24:37.000635 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 22:24:37.000640 kernel: ACPI: Added _OSI(Module Device) Jan 13 22:24:37.000645 kernel: ACPI: Added _OSI(Processor Device) Jan 13 22:24:37.000651 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 22:24:37.000656 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 22:24:37.000661 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 13 22:24:37.000667 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:24:37.000673 kernel: ACPI: SSDT 0xFFFF918F01CFF400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 13 22:24:37.000678 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:24:37.000684 kernel: ACPI: SSDT 0xFFFF918F01CEB800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 13 22:24:37.000689 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:24:37.000694 kernel: ACPI: SSDT 0xFFFF918F0024E200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 13 22:24:37.000699 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:24:37.000705 kernel: ACPI: SSDT 0xFFFF918F01CE9800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 13 22:24:37.000710 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:24:37.000715 kernel: ACPI: SSDT 0xFFFF918F0012F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 13 22:24:37.000720 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:24:37.000727 kernel: ACPI: SSDT 0xFFFF918F01CFC400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 13 22:24:37.000732 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 13 22:24:37.000737 kernel: ACPI: Interpreter enabled Jan 13 22:24:37.000743 kernel: ACPI: PM: (supports S0 S5) Jan 13 22:24:37.000748 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 13 22:24:37.000753 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 13 22:24:37.000759 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 13 22:24:37.000766 kernel: HEST: Table parsing has been initialized. Jan 13 22:24:37.000771 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 13 22:24:37.000795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 22:24:37.000801 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 22:24:37.000820 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 13 22:24:37.000826 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 13 22:24:37.000831 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 13 22:24:37.000836 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 13 22:24:37.000842 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 13 22:24:37.000847 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 13 22:24:37.000853 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 13 22:24:37.000859 kernel: ACPI: \_TZ_.FN00: New power resource Jan 13 22:24:37.000864 kernel: ACPI: \_TZ_.FN01: New power resource Jan 13 22:24:37.000870 kernel: ACPI: \_TZ_.FN02: New power resource Jan 13 22:24:37.000875 kernel: ACPI: \_TZ_.FN03: New power resource Jan 13 22:24:37.000880 kernel: ACPI: \_TZ_.FN04: New power resource Jan 13 22:24:37.000886 kernel: ACPI: \PIN_: New power resource Jan 13 22:24:37.000891 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 13 22:24:37.000961 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 22:24:37.001015 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 13 22:24:37.001062 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 13 22:24:37.001070 kernel: PCI host bridge to 
bus 0000:00 Jan 13 22:24:37.001118 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 22:24:37.001161 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 22:24:37.001201 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 22:24:37.001242 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Jan 13 22:24:37.001285 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 13 22:24:37.001325 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 13 22:24:37.001382 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 13 22:24:37.001435 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 13 22:24:37.001483 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.001533 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Jan 13 22:24:37.001583 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.001631 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Jan 13 22:24:37.001678 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Jan 13 22:24:37.001723 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Jan 13 22:24:37.001772 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Jan 13 22:24:37.001822 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 13 22:24:37.001869 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Jan 13 22:24:37.001920 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 13 22:24:37.001968 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Jan 13 22:24:37.002021 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 13 22:24:37.002068 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Jan 13 22:24:37.002115 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 13 22:24:37.002173 kernel: pci 
0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 13 22:24:37.002223 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Jan 13 22:24:37.002269 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Jan 13 22:24:37.002318 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 13 22:24:37.002365 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 22:24:37.002416 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 13 22:24:37.002462 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 22:24:37.002515 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 13 22:24:37.002561 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Jan 13 22:24:37.002607 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 13 22:24:37.002656 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 13 22:24:37.002704 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Jan 13 22:24:37.002749 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 13 22:24:37.002871 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 13 22:24:37.002921 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Jan 13 22:24:37.002967 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 13 22:24:37.003016 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 13 22:24:37.003063 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Jan 13 22:24:37.003111 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Jan 13 22:24:37.003158 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Jan 13 22:24:37.003203 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Jan 13 22:24:37.003250 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Jan 13 22:24:37.003295 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Jan 13 22:24:37.003342 kernel: pci 0000:00:17.0: PME# supported from D3hot 
Jan 13 22:24:37.003392 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 13 22:24:37.003442 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.003492 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 13 22:24:37.003542 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.003595 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 13 22:24:37.003641 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.003693 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 13 22:24:37.003742 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.003799 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Jan 13 22:24:37.003847 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.003898 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 13 22:24:37.003944 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 22:24:37.003994 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 13 22:24:37.004046 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 13 22:24:37.004093 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Jan 13 22:24:37.004138 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 13 22:24:37.004188 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 13 22:24:37.004234 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 13 22:24:37.004283 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 22:24:37.004339 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Jan 13 22:24:37.004387 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 13 22:24:37.004435 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Jan 13 22:24:37.004483 kernel: pci 0000:02:00.0: PME# supported from D3cold Jan 13 22:24:37.004531 kernel: pci 
0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 22:24:37.004578 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 22:24:37.004630 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jan 13 22:24:37.004680 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 13 22:24:37.004728 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref] Jan 13 22:24:37.004778 kernel: pci 0000:02:00.1: PME# supported from D3cold Jan 13 22:24:37.004825 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 22:24:37.004874 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 22:24:37.004921 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 13 22:24:37.004969 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Jan 13 22:24:37.005018 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:24:37.005066 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 13 22:24:37.005120 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 13 22:24:37.005169 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 13 22:24:37.005216 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Jan 13 22:24:37.005263 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jan 13 22:24:37.005311 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Jan 13 22:24:37.005358 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.005408 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 13 22:24:37.005454 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 22:24:37.005502 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Jan 13 22:24:37.005555 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jan 13 22:24:37.005604 kernel: pci 
0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 13 22:24:37.005653 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Jan 13 22:24:37.005701 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 22:24:37.005752 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Jan 13 22:24:37.005802 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 13 22:24:37.005850 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 13 22:24:37.005897 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 22:24:37.005944 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Jan 13 22:24:37.005990 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 13 22:24:37.006043 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 22:24:37.006091 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 13 22:24:37.006142 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 22:24:37.006190 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 22:24:37.006237 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 13 22:24:37.006284 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.006330 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.006381 kernel: pci_bus 0000:08: extended config space not accessible Jan 13 22:24:37.006434 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 22:24:37.006488 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Jan 13 22:24:37.006537 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Jan 13 22:24:37.006587 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 22:24:37.006637 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 22:24:37.006686 kernel: pci 0000:08:00.0: supports D1 D2 Jan 13 22:24:37.006736 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 22:24:37.006791 kernel: pci 0000:07:00.0: 
PCI bridge to [bus 08] Jan 13 22:24:37.006841 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.006890 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.006899 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 22:24:37.006905 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 22:24:37.006910 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 22:24:37.006916 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 22:24:37.006922 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 22:24:37.006928 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 13 22:24:37.006933 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 22:24:37.006940 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 22:24:37.006946 kernel: iommu: Default domain type: Translated Jan 13 22:24:37.006952 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 22:24:37.006957 kernel: PCI: Using ACPI for IRQ routing Jan 13 22:24:37.006963 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 22:24:37.006969 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 22:24:37.006974 kernel: e820: reserve RAM buffer [mem 0x620bb000-0x63ffffff] Jan 13 22:24:37.006980 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Jan 13 22:24:37.006985 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Jan 13 22:24:37.006992 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Jan 13 22:24:37.007040 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 13 22:24:37.007090 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 13 22:24:37.007140 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 22:24:37.007148 kernel: vgaarb: loaded Jan 13 22:24:37.007154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 13 
22:24:37.007159 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 13 22:24:37.007165 kernel: clocksource: Switched to clocksource tsc-early Jan 13 22:24:37.007171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 22:24:37.007178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 22:24:37.007184 kernel: pnp: PnP ACPI init Jan 13 22:24:37.007235 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 22:24:37.007282 kernel: pnp 00:02: [dma 0 disabled] Jan 13 22:24:37.007329 kernel: pnp 00:03: [dma 0 disabled] Jan 13 22:24:37.007374 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 22:24:37.007419 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 22:24:37.007465 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 13 22:24:37.007511 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 22:24:37.007554 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 22:24:37.007596 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 22:24:37.007639 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 13 22:24:37.007681 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 22:24:37.007727 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 22:24:37.007774 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 22:24:37.007818 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 22:24:37.007869 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 22:24:37.007912 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 22:24:37.007954 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 22:24:37.007997 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 22:24:37.008041 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be 
reserved Jan 13 22:24:37.008083 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 22:24:37.008125 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 22:24:37.008171 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 22:24:37.008180 kernel: pnp: PnP ACPI: found 10 devices Jan 13 22:24:37.008186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 22:24:37.008192 kernel: NET: Registered PF_INET protocol family Jan 13 22:24:37.008199 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:24:37.008205 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 22:24:37.008211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 22:24:37.008216 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:24:37.008222 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 22:24:37.008228 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 22:24:37.008234 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 22:24:37.008239 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 22:24:37.008245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 22:24:37.008252 kernel: NET: Registered PF_XDP protocol family Jan 13 22:24:37.008299 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Jan 13 22:24:37.008347 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Jan 13 22:24:37.008394 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Jan 13 22:24:37.008441 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 22:24:37.008493 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008540 kernel: pci 
0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008589 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008636 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 22:24:37.008684 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 13 22:24:37.008730 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Jan 13 22:24:37.008782 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:24:37.008829 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 13 22:24:37.008878 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 13 22:24:37.008925 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 22:24:37.008972 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Jan 13 22:24:37.009019 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 13 22:24:37.009066 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 22:24:37.009113 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Jan 13 22:24:37.009159 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 13 22:24:37.009207 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 13 22:24:37.009257 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.009305 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.009350 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 13 22:24:37.009397 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 13 22:24:37.009444 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.009486 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 22:24:37.009529 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 22:24:37.009569 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 22:24:37.009611 kernel: pci_bus 
0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 22:24:37.009654 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Jan 13 22:24:37.009696 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 22:24:37.009742 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Jan 13 22:24:37.009790 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:24:37.009837 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 13 22:24:37.009880 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Jan 13 22:24:37.009929 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 13 22:24:37.009973 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Jan 13 22:24:37.010021 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 22:24:37.010065 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.010110 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 13 22:24:37.010155 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Jan 13 22:24:37.010163 kernel: PCI: CLS 64 bytes, default 64 Jan 13 22:24:37.010170 kernel: DMAR: No ATSR found Jan 13 22:24:37.010176 kernel: DMAR: No SATC found Jan 13 22:24:37.010182 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 13 22:24:37.010188 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 13 22:24:37.010193 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 13 22:24:37.010199 kernel: DMAR: IOMMU feature pasid inconsistent Jan 13 22:24:37.010205 kernel: DMAR: IOMMU feature eafs inconsistent Jan 13 22:24:37.010210 kernel: DMAR: IOMMU feature prs inconsistent Jan 13 22:24:37.010216 kernel: DMAR: IOMMU feature nest inconsistent Jan 13 22:24:37.010223 kernel: DMAR: IOMMU feature mts inconsistent Jan 13 22:24:37.010229 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 13 22:24:37.010235 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 13 22:24:37.010240 
kernel: DMAR: dmar0: Using Queued invalidation Jan 13 22:24:37.010246 kernel: DMAR: dmar1: Using Queued invalidation Jan 13 22:24:37.010292 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 13 22:24:37.010340 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 13 22:24:37.010387 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 13 22:24:37.010434 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 13 22:24:37.010483 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 13 22:24:37.010530 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 13 22:24:37.010578 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 13 22:24:37.010624 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 13 22:24:37.010671 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 13 22:24:37.010716 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 13 22:24:37.010764 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 13 22:24:37.010811 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 13 22:24:37.010861 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 13 22:24:37.010907 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 13 22:24:37.010954 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 13 22:24:37.011000 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 13 22:24:37.011048 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 13 22:24:37.011093 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 13 22:24:37.011140 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 13 22:24:37.011187 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 13 22:24:37.011233 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 13 22:24:37.011283 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 13 22:24:37.011329 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 13 22:24:37.011378 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 13 22:24:37.011425 kernel: pci 0000:02:00.1: Adding to iommu group 2 Jan 13 22:24:37.011473 kernel: pci 0000:04:00.0: 
Adding to iommu group 16 Jan 13 22:24:37.011521 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 13 22:24:37.011570 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 13 22:24:37.011619 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 13 22:24:37.011629 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 22:24:37.011635 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 22:24:37.011641 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Jan 13 22:24:37.011647 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 13 22:24:37.011653 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 22:24:37.011658 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 22:24:37.011664 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 22:24:37.011670 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 13 22:24:37.011721 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 22:24:37.011730 kernel: Initialise system trusted keyrings Jan 13 22:24:37.011736 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 22:24:37.011741 kernel: Key type asymmetric registered Jan 13 22:24:37.011747 kernel: Asymmetric key parser 'x509' registered Jan 13 22:24:37.011753 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 22:24:37.011758 kernel: io scheduler mq-deadline registered Jan 13 22:24:37.011767 kernel: io scheduler kyber registered Jan 13 22:24:37.011773 kernel: io scheduler bfq registered Jan 13 22:24:37.011862 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 13 22:24:37.011908 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 13 22:24:37.011955 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 13 22:24:37.012002 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jan 13 22:24:37.012049 kernel: 
pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 13 22:24:37.012095 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 13 22:24:37.012142 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 13 22:24:37.012196 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 13 22:24:37.012204 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 13 22:24:37.012210 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 13 22:24:37.012216 kernel: pstore: Using crash dump compression: deflate Jan 13 22:24:37.012222 kernel: pstore: Registered erst as persistent store backend Jan 13 22:24:37.012227 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 22:24:37.012233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 22:24:37.012239 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 22:24:37.012246 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 22:24:37.012293 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 13 22:24:37.012301 kernel: i8042: PNP: No PS/2 controller found. 
Jan 13 22:24:37.012343 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 13 22:24:37.012387 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 13 22:24:37.012430 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T22:24:35 UTC (1736807075) Jan 13 22:24:37.012473 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 13 22:24:37.012481 kernel: intel_pstate: Intel P-state driver initializing Jan 13 22:24:37.012489 kernel: intel_pstate: Disabling energy efficiency optimization Jan 13 22:24:37.012494 kernel: intel_pstate: HWP enabled Jan 13 22:24:37.012500 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 13 22:24:37.012506 kernel: vesafb: scrolling: redraw Jan 13 22:24:37.012511 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 13 22:24:37.012517 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x0000000080541909, using 768k, total 768k Jan 13 22:24:37.012523 kernel: Console: switching to colour frame buffer device 128x48 Jan 13 22:24:37.012528 kernel: fb0: VESA VGA frame buffer device Jan 13 22:24:37.012534 kernel: NET: Registered PF_INET6 protocol family Jan 13 22:24:37.012541 kernel: Segment Routing with IPv6 Jan 13 22:24:37.012546 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 22:24:37.012552 kernel: NET: Registered PF_PACKET protocol family Jan 13 22:24:37.012558 kernel: Key type dns_resolver registered Jan 13 22:24:37.012563 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 13 22:24:37.012569 kernel: IPI shorthand broadcast: enabled Jan 13 22:24:37.012575 kernel: sched_clock: Marking stable (1720491524, 1390614232)->(4574311019, -1463205263) Jan 13 22:24:37.012580 kernel: registered taskstats version 1 Jan 13 22:24:37.012586 kernel: Loading compiled-in X.509 certificates Jan 13 22:24:37.012592 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 22:24:37.012598 kernel: Key type .fscrypt registered Jan 13 22:24:37.012604 kernel: Key type fscrypt-provisioning registered Jan 13 22:24:37.012609 kernel: ima: Allocated hash algorithm: sha1 Jan 13 22:24:37.012615 kernel: ima: No architecture policies found Jan 13 22:24:37.012621 kernel: clk: Disabling unused clocks Jan 13 22:24:37.012626 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 22:24:37.012632 kernel: Write protecting the kernel read-only data: 36864k Jan 13 22:24:37.012637 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 22:24:37.012644 kernel: Run /init as init process Jan 13 22:24:37.012650 kernel: with arguments: Jan 13 22:24:37.012655 kernel: /init Jan 13 22:24:37.012661 kernel: with environment: Jan 13 22:24:37.012666 kernel: HOME=/ Jan 13 22:24:37.012672 kernel: TERM=linux Jan 13 22:24:37.012678 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 22:24:37.012684 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 22:24:37.012692 systemd[1]: Detected architecture x86-64. Jan 13 22:24:37.012698 systemd[1]: Running in initrd. Jan 13 22:24:37.012704 systemd[1]: No hostname configured, using default hostname. Jan 13 22:24:37.012710 systemd[1]: Hostname set to . 
Jan 13 22:24:37.012716 systemd[1]: Initializing machine ID from random generator. Jan 13 22:24:37.012722 systemd[1]: Queued start job for default target initrd.target. Jan 13 22:24:37.012728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:24:37.012735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:24:37.012741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 22:24:37.012747 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 22:24:37.012753 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 22:24:37.012759 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 22:24:37.012768 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 22:24:37.012774 kernel: tsc: Refined TSC clocksource calibration: 3407.985 MHz Jan 13 22:24:37.012781 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 22:24:37.012787 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc5a980c, max_idle_ns: 440795300013 ns Jan 13 22:24:37.012793 kernel: clocksource: Switched to clocksource tsc Jan 13 22:24:37.012798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:24:37.012804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:24:37.012810 systemd[1]: Reached target paths.target - Path Units. Jan 13 22:24:37.012816 systemd[1]: Reached target slices.target - Slice Units. Jan 13 22:24:37.012822 systemd[1]: Reached target swap.target - Swaps. Jan 13 22:24:37.012828 systemd[1]: Reached target timers.target - Timer Units. 
Jan 13 22:24:37.012835 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 22:24:37.012841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 22:24:37.012847 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 22:24:37.012853 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 22:24:37.012859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:24:37.012865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 22:24:37.012871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 22:24:37.012877 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 22:24:37.012884 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 22:24:37.012889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 22:24:37.012895 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 22:24:37.012901 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 22:24:37.012907 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 22:24:37.012923 systemd-journald[267]: Collecting audit messages is disabled. Jan 13 22:24:37.012938 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 22:24:37.012944 systemd-journald[267]: Journal started Jan 13 22:24:37.012957 systemd-journald[267]: Runtime Journal (/run/log/journal/f3d707202c0249c4897a2bc921c8599f) is 8.0M, max 636.6M, 628.6M free. Jan 13 22:24:37.035466 systemd-modules-load[268]: Inserted module 'overlay' Jan 13 22:24:37.057765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:24:37.077783 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 22:24:37.087070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 13 22:24:37.087222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:24:37.087338 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 22:24:37.130767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 22:24:37.148523 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 13 22:24:37.159998 kernel: Bridge firewalling registered Jan 13 22:24:37.151140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 22:24:37.171385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 22:24:37.196079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 22:24:37.217234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:24:37.238574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:24:37.259362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:24:37.297988 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:24:37.309333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 22:24:37.309722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 22:24:37.315311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:24:37.315621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:24:37.316671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 22:24:37.326022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 22:24:37.337591 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 22:24:37.340819 systemd-resolved[297]: Positive Trust Anchors: Jan 13 22:24:37.340825 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 22:24:37.340862 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 22:24:37.343221 systemd-resolved[297]: Defaulting to hostname 'linux'. Jan 13 22:24:37.357996 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:24:37.472982 dracut-cmdline[307]: dracut-dracut-053 Jan 13 22:24:37.472982 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:24:37.377961 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:24:37.587793 kernel: SCSI subsystem initialized Jan 13 22:24:37.610793 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 22:24:37.633845 kernel: iscsi: registered transport (tcp) Jan 13 22:24:37.664303 kernel: iscsi: registered transport (qla4xxx) Jan 13 22:24:37.664320 kernel: QLogic iSCSI HBA Driver Jan 13 22:24:37.697316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 22:24:37.719053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 22:24:37.776722 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 22:24:37.776742 kernel: device-mapper: uevent: version 1.0.3 Jan 13 22:24:37.796406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 22:24:37.853844 kernel: raid6: avx2x4 gen() 53437 MB/s Jan 13 22:24:37.885841 kernel: raid6: avx2x2 gen() 53898 MB/s Jan 13 22:24:37.922191 kernel: raid6: avx2x1 gen() 45243 MB/s Jan 13 22:24:37.922209 kernel: raid6: using algorithm avx2x2 gen() 53898 MB/s Jan 13 22:24:37.969293 kernel: raid6: .... xor() 31409 MB/s, rmw enabled Jan 13 22:24:37.969311 kernel: raid6: using avx2x2 recovery algorithm Jan 13 22:24:38.009793 kernel: xor: automatically using best checksumming function avx Jan 13 22:24:38.122806 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 22:24:38.128160 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 22:24:38.157093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:24:38.163706 systemd-udevd[497]: Using default interface naming scheme 'v255'. Jan 13 22:24:38.168005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:24:38.197025 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 22:24:38.247331 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 13 22:24:38.263689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 22:24:38.285005 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 22:24:38.345112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:24:38.402873 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 22:24:38.402890 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 22:24:38.402898 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 22:24:38.427778 kernel: ACPI: bus type USB registered Jan 13 22:24:38.427798 kernel: usbcore: registered new interface driver usbfs Jan 13 22:24:38.442825 kernel: usbcore: registered new interface driver hub Jan 13 22:24:38.457481 kernel: usbcore: registered new device driver usb Jan 13 22:24:38.472660 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 22:24:38.502122 kernel: PTP clock support registered Jan 13 22:24:38.502147 kernel: libata version 3.00 loaded. Jan 13 22:24:38.502162 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 22:24:38.502176 kernel: AES CTR mode by8 optimization enabled Jan 13 22:24:38.496992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 13 22:24:39.331832 kernel: ahci 0000:00:17.0: version 3.0 Jan 13 22:24:39.331932 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 22:24:39.332002 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 13 22:24:39.332064 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 13 22:24:39.332125 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 13 22:24:39.332186 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 13 22:24:39.332246 kernel: scsi host0: ahci Jan 13 22:24:39.332316 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 22:24:39.332377 kernel: scsi host1: ahci Jan 13 22:24:39.332438 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 13 22:24:39.332498 kernel: scsi host2: ahci Jan 13 22:24:39.332558 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 13 22:24:39.332620 kernel: scsi host3: ahci Jan 13 22:24:39.332685 kernel: hub 1-0:1.0: USB hub found Jan 13 22:24:39.332755 kernel: scsi host4: ahci Jan 13 22:24:39.332822 kernel: hub 1-0:1.0: 16 ports detected Jan 13 22:24:39.332882 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 13 22:24:39.332891 kernel: scsi host5: ahci Jan 13 22:24:39.332948 kernel: scsi host6: ahci Jan 13 22:24:39.333005 kernel: scsi host7: ahci Jan 13 22:24:39.333064 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Jan 13 22:24:39.333074 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Jan 13 22:24:39.333081 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Jan 13 22:24:39.333088 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Jan 13 22:24:39.333096 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Jan 13 22:24:39.333103 kernel: ata6: SATA max UDMA/133 
abar m2048@0x7e516000 port 0x7e516380 irq 129 Jan 13 22:24:39.333110 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Jan 13 22:24:39.333118 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Jan 13 22:24:39.333126 kernel: hub 2-0:1.0: USB hub found Jan 13 22:24:39.333188 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 13 22:24:39.333197 kernel: pps pps0: new PPS source ptp0 Jan 13 22:24:39.333261 kernel: hub 2-0:1.0: 10 ports detected Jan 13 22:24:39.333320 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 13 22:24:39.333388 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 13 22:24:39.522289 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 22:24:39.522376 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 22:24:39.522386 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1c:a6 Jan 13 22:24:39.522454 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 13 22:24:39.522518 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 13 22:24:39.522527 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 13 22:24:39.522590 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 22:24:39.522599 kernel: pps pps1: new PPS source ptp1 Jan 13 22:24:39.522662 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 22:24:39.522673 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 13 22:24:39.522738 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 22:24:39.522747 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 22:24:39.522820 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 22:24:39.522829 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1c:a7 Jan 13 22:24:39.522890 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 22:24:39.522899 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 13 22:24:39.522960 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 13 22:24:39.522970 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 22:24:39.523031 kernel: hub 1-14:1.0: USB hub found Jan 13 22:24:39.523107 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 22:24:39.523115 kernel: hub 1-14:1.0: 4 ports detected Jan 13 22:24:39.523185 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 22:24:39.523193 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 22:24:39.523201 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Jan 13 22:24:39.834807 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 22:24:39.834823 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 22:24:39.834900 kernel: ata1.00: Features: NCQ-prio Jan 13 22:24:39.834909 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 13 22:24:39.835018 kernel: ata2.00: Features: NCQ-prio Jan 13 22:24:39.835027 kernel: ata1.00: configured for UDMA/133 Jan 13 22:24:39.835035 
kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 22:24:39.835113 kernel: ata2.00: configured for UDMA/133 Jan 13 22:24:39.835121 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 22:24:40.002533 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 13 22:24:40.002669 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:24:40.002685 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 13 22:24:40.002803 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 22:24:40.002909 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:24:40.002924 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 22:24:40.003054 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 13 22:24:40.003153 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 13 22:24:40.003250 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 13 22:24:40.003347 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 22:24:40.003443 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 13 22:24:40.003539 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:24:40.003554 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 13 22:24:40.003649 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 22:24:40.003665 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 13 22:24:40.003774 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 22:24:40.003881 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 13 22:24:40.003982 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 13 22:24:40.004080 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 13 22:24:40.004172 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 22:24:40.004263 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes 
Jan 13 22:24:40.004353 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:24:40.004367 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 22:24:40.004465 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 22:24:40.004479 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Jan 13 22:24:40.394156 kernel: GPT:9289727 != 937703087 Jan 13 22:24:40.394168 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 22:24:40.394176 kernel: GPT:9289727 != 937703087 Jan 13 22:24:40.394183 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 22:24:40.394191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 22:24:40.394198 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 22:24:40.394275 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 13 22:24:40.394344 kernel: usbcore: registered new interface driver usbhid Jan 13 22:24:40.394353 kernel: usbhid: USB HID core driver Jan 13 22:24:40.394360 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (547) Jan 13 22:24:40.394368 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (568) Jan 13 22:24:40.394376 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 13 22:24:40.394383 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 22:24:40.394446 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 13 22:24:40.394509 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 13 22:24:40.394580 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 13 22:24:40.394589 kernel: 
hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 13 22:24:40.394654 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:24:40.394662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 22:24:40.394670 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 22:24:40.394733 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:24:38.566591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 22:24:40.443316 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 22:24:39.368195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:24:40.476824 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 13 22:24:39.436663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 22:24:39.466901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 22:24:39.467005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:24:39.495919 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:24:40.477246 disk-uuid[718]: Primary Header is updated. Jan 13 22:24:40.477246 disk-uuid[718]: Secondary Entries is updated. Jan 13 22:24:40.477246 disk-uuid[718]: Secondary Header is updated. Jan 13 22:24:39.533201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 22:24:39.801830 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 22:24:39.802149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:24:39.819863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:24:40.157963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 22:24:40.179225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:24:40.197533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Jan 13 22:24:40.248578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Jan 13 22:24:40.288389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 13 22:24:40.309925 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 13 22:24:40.320956 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Jan 13 22:24:40.339956 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 22:24:40.355847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:24:40.355876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:40.377812 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:24:40.424149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:24:40.497767 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
Jan 13 22:24:40.681480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:40.714075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 22:24:40.756571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:24:41.402162 kernel: ata2.00: Enabling discard_zeroes_data
Jan 13 22:24:41.422590 disk-uuid[719]: The operation has completed successfully.
Jan 13 22:24:41.430980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 22:24:41.459352 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 22:24:41.459402 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 22:24:41.497008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 22:24:41.536866 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 22:24:41.536934 sh[752]: Success
Jan 13 22:24:41.572546 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 22:24:41.599251 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 22:24:41.608084 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 22:24:41.676401 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 22:24:41.676422 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:41.698812 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 22:24:41.718879 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 22:24:41.737862 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 22:24:41.777807 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 22:24:41.780731 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 22:24:41.789297 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 22:24:41.794921 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 22:24:41.909278 kernel: BTRFS info (device sda6): first mount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:41.909364 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:41.909373 kernel: BTRFS info (device sda6): using free space tree
Jan 13 22:24:41.909380 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 22:24:41.909388 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 22:24:41.830267 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 22:24:41.945872 kernel: BTRFS info (device sda6): last unmount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:41.938981 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 22:24:41.968967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 22:24:41.984352 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:24:42.015946 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 22:24:42.033548 ignition[862]: Ignition 2.19.0
Jan 13 22:24:42.026975 systemd-networkd[936]: lo: Link UP
Jan 13 22:24:42.033552 ignition[862]: Stage: fetch-offline
Jan 13 22:24:42.026977 systemd-networkd[936]: lo: Gained carrier
Jan 13 22:24:42.033570 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:42.029431 systemd-networkd[936]: Enumeration completed
Jan 13 22:24:42.033575 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:42.029476 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 22:24:42.033626 ignition[862]: parsed url from cmdline: ""
Jan 13 22:24:42.030085 systemd-networkd[936]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.033628 ignition[862]: no config URL provided
Jan 13 22:24:42.035602 unknown[862]: fetched base config from "system"
Jan 13 22:24:42.033630 ignition[862]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 22:24:42.035606 unknown[862]: fetched user config from "system"
Jan 13 22:24:42.033652 ignition[862]: parsing config with SHA512: eb8e87c588681bb665361d414a9d4588495cca98e352b13128de1729d105d8a4bfe5c2a9188462373542cc5376e15bf67561f7568a4c767b5337695889f8d931
Jan 13 22:24:42.046155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 22:24:42.036573 ignition[862]: fetch-offline: fetch-offline passed
Jan 13 22:24:42.059625 systemd-networkd[936]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.036578 ignition[862]: POST message to Packet Timeline
Jan 13 22:24:42.063139 systemd[1]: Reached target network.target - Network.
Jan 13 22:24:42.036582 ignition[862]: POST Status error: resource requires networking
Jan 13 22:24:42.077918 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 22:24:42.036668 ignition[862]: Ignition finished successfully
Jan 13 22:24:42.273855 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Jan 13 22:24:42.084980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 22:24:42.105493 ignition[952]: Ignition 2.19.0
Jan 13 22:24:42.088039 systemd-networkd[936]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.105502 ignition[952]: Stage: kargs
Jan 13 22:24:42.268018 systemd-networkd[936]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:24:42.105715 ignition[952]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:42.105729 ignition[952]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:42.106885 ignition[952]: kargs: kargs passed
Jan 13 22:24:42.106891 ignition[952]: POST message to Packet Timeline
Jan 13 22:24:42.106907 ignition[952]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:42.107731 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37743->[::1]:53: read: connection refused
Jan 13 22:24:42.308071 ignition[952]: GET https://metadata.packet.net/metadata: attempt #2
Jan 13 22:24:42.308715 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59621->[::1]:53: read: connection refused
Jan 13 22:24:42.491802 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Jan 13 22:24:42.493025 systemd-networkd[936]: eno1: Link UP
Jan 13 22:24:42.493234 systemd-networkd[936]: eno2: Link UP
Jan 13 22:24:42.493353 systemd-networkd[936]: enp2s0f0np0: Link UP
Jan 13 22:24:42.493487 systemd-networkd[936]: enp2s0f0np0: Gained carrier
Jan 13 22:24:42.502932 systemd-networkd[936]: enp2s0f1np1: Link UP
Jan 13 22:24:42.520888 systemd-networkd[936]: enp2s0f0np0: DHCPv4 address 147.75.202.79/31, gateway 147.75.202.78 acquired from 145.40.83.140
Jan 13 22:24:42.708842 ignition[952]: GET https://metadata.packet.net/metadata: attempt #3
Jan 13 22:24:42.709992 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55045->[::1]:53: read: connection refused
Jan 13 22:24:43.293425 systemd-networkd[936]: enp2s0f1np1: Gained carrier
Jan 13 22:24:43.510419 ignition[952]: GET https://metadata.packet.net/metadata: attempt #4
Jan 13 22:24:43.511732 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48948->[::1]:53: read: connection refused
Jan 13 22:24:43.805265 systemd-networkd[936]: enp2s0f0np0: Gained IPv6LL
Jan 13 22:24:44.637271 systemd-networkd[936]: enp2s0f1np1: Gained IPv6LL
Jan 13 22:24:45.112913 ignition[952]: GET https://metadata.packet.net/metadata: attempt #5
Jan 13 22:24:45.114174 ignition[952]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37211->[::1]:53: read: connection refused
Jan 13 22:24:48.317619 ignition[952]: GET https://metadata.packet.net/metadata: attempt #6
Jan 13 22:24:48.986114 ignition[952]: GET result: OK
Jan 13 22:24:49.403071 ignition[952]: Ignition finished successfully
Jan 13 22:24:49.407941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 22:24:49.433058 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 22:24:49.440443 ignition[971]: Ignition 2.19.0
Jan 13 22:24:49.440448 ignition[971]: Stage: disks
Jan 13 22:24:49.440547 ignition[971]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:49.440553 ignition[971]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:49.441104 ignition[971]: disks: disks passed
Jan 13 22:24:49.441106 ignition[971]: POST message to Packet Timeline
Jan 13 22:24:49.441114 ignition[971]: GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:50.167886 ignition[971]: GET result: OK
Jan 13 22:24:50.556734 ignition[971]: Ignition finished successfully
Jan 13 22:24:50.560219 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 22:24:50.574910 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 22:24:50.593027 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 22:24:50.614037 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:24:50.635050 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 22:24:50.655053 systemd[1]: Reached target basic.target - Basic System.
Jan 13 22:24:50.684018 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 22:24:50.716579 systemd-fsck[990]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 22:24:50.726238 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 22:24:50.754937 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 22:24:50.851784 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 22:24:50.852172 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 22:24:50.860156 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:24:50.896992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:24:50.905722 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 22:24:51.032335 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (1000)
Jan 13 22:24:51.032424 kernel: BTRFS info (device sda6): first mount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:51.032432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:51.032439 kernel: BTRFS info (device sda6): using free space tree
Jan 13 22:24:51.032446 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 22:24:51.032453 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 22:24:50.947415 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 22:24:51.032699 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 13 22:24:51.064860 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 22:24:51.064882 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:24:51.120881 coreos-metadata[1018]: Jan 13 22:24:51.116 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 22:24:51.142951 coreos-metadata[1002]: Jan 13 22:24:51.116 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 13 22:24:51.084690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:24:51.110926 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 22:24:51.141004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 22:24:51.190893 initrd-setup-root[1032]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 22:24:51.200869 initrd-setup-root[1039]: cut: /sysroot/etc/group: No such file or directory
Jan 13 22:24:51.210881 initrd-setup-root[1046]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 22:24:51.220860 initrd-setup-root[1053]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 22:24:51.227023 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 22:24:51.244874 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 22:24:51.274894 coreos-metadata[1018]: Jan 13 22:24:51.233 INFO Fetch successful
Jan 13 22:24:51.294000 kernel: BTRFS info (device sda6): last unmount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:51.249449 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 22:24:51.284442 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 22:24:51.284705 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 13 22:24:51.284748 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 13 22:24:51.328513 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 22:24:51.361926 ignition[1123]: INFO : Ignition 2.19.0
Jan 13 22:24:51.361926 ignition[1123]: INFO : Stage: mount
Jan 13 22:24:51.361926 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:51.361926 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:51.361926 ignition[1123]: INFO : mount: mount passed
Jan 13 22:24:51.361926 ignition[1123]: INFO : POST message to Packet Timeline
Jan 13 22:24:51.361926 ignition[1123]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:51.769840 coreos-metadata[1002]: Jan 13 22:24:51.769 INFO Fetch successful
Jan 13 22:24:51.844663 coreos-metadata[1002]: Jan 13 22:24:51.844 INFO wrote hostname ci-4081.3.0-a-8862dc3d2a to /sysroot/etc/hostname
Jan 13 22:24:51.846180 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 22:24:52.026054 ignition[1123]: INFO : GET result: OK
Jan 13 22:24:52.357150 ignition[1123]: INFO : Ignition finished successfully
Jan 13 22:24:52.359854 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 22:24:52.393945 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 22:24:52.404991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:24:52.464713 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1147)
Jan 13 22:24:52.464731 kernel: BTRFS info (device sda6): first mount of filesystem 97b32d8a-f9c6-4033-9b3a-f91a977b5bd4
Jan 13 22:24:52.485892 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:24:52.504711 kernel: BTRFS info (device sda6): using free space tree
Jan 13 22:24:52.544131 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 22:24:52.544148 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 22:24:52.557932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:24:52.587012 ignition[1164]: INFO : Ignition 2.19.0
Jan 13 22:24:52.587012 ignition[1164]: INFO : Stage: files
Jan 13 22:24:52.602002 ignition[1164]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:52.602002 ignition[1164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:52.602002 ignition[1164]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 22:24:52.602002 ignition[1164]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 22:24:52.602002 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 22:24:52.591529 unknown[1164]: wrote ssh authorized keys file for user: core
Jan 13 22:24:52.735847 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 22:24:52.798856 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 22:24:52.798856 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:52.831974 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 22:24:53.310276 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 22:24:53.530805 ignition[1164]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 22:24:53.530805 ignition[1164]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:24:53.561092 ignition[1164]: INFO : files: files passed
Jan 13 22:24:53.561092 ignition[1164]: INFO : POST message to Packet Timeline
Jan 13 22:24:53.561092 ignition[1164]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:54.184120 ignition[1164]: INFO : GET result: OK
Jan 13 22:24:54.600534 ignition[1164]: INFO : Ignition finished successfully
Jan 13 22:24:54.604233 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 22:24:54.637017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 22:24:54.647365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 22:24:54.668101 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 22:24:54.668176 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 22:24:54.725976 initrd-setup-root-after-ignition[1204]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:24:54.725976 initrd-setup-root-after-ignition[1204]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:24:54.765039 initrd-setup-root-after-ignition[1208]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:24:54.730144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:24:54.751811 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 22:24:54.791074 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 22:24:54.839308 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 22:24:54.839360 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 22:24:54.858139 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 22:24:54.878964 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 22:24:54.899158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 22:24:54.914887 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 22:24:54.964481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:24:54.992196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 22:24:55.010256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:24:55.034054 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:24:55.046069 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 22:24:55.064085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 22:24:55.064231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:24:55.104210 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 22:24:55.114346 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 22:24:55.133351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 22:24:55.152381 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:24:55.173387 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 22:24:55.194392 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 22:24:55.214379 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 22:24:55.235365 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 22:24:55.257417 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 22:24:55.277363 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 22:24:55.297429 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 22:24:55.297862 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:24:55.332207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:24:55.342352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:24:55.363232 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 22:24:55.363680 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:24:55.385425 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 22:24:55.385847 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 22:24:55.417336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 22:24:55.417744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 22:24:55.438691 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 22:24:55.457232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 22:24:55.457629 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:24:55.478376 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 22:24:55.497349 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 22:24:55.516361 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 22:24:55.516664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 22:24:55.536389 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 22:24:55.536664 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 22:24:55.559436 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 22:24:55.676965 ignition[1228]: INFO : Ignition 2.19.0
Jan 13 22:24:55.676965 ignition[1228]: INFO : Stage: umount
Jan 13 22:24:55.676965 ignition[1228]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:24:55.676965 ignition[1228]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 13 22:24:55.676965 ignition[1228]: INFO : umount: umount passed
Jan 13 22:24:55.676965 ignition[1228]: INFO : POST message to Packet Timeline
Jan 13 22:24:55.676965 ignition[1228]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 13 22:24:55.559824 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:24:55.578429 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 22:24:55.578787 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 22:24:55.596414 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 22:24:55.596782 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 22:24:55.629056 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 22:24:55.643888 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 22:24:55.644014 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:24:55.674065 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 22:24:55.677016 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 22:24:55.677180 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:24:55.685239 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 22:24:55.685440 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 22:24:55.735204 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 22:24:55.739278 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 22:24:55.739492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 22:24:55.858548 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 22:24:55.858829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 22:24:56.340475 ignition[1228]: INFO : GET result: OK
Jan 13 22:24:56.670449 ignition[1228]: INFO : Ignition finished successfully
Jan 13 22:24:56.673207 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 22:24:56.673493 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 22:24:56.691058 systemd[1]: Stopped target network.target - Network.
Jan 13 22:24:56.707042 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 22:24:56.707229 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 22:24:56.725235 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 22:24:56.725392 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 22:24:56.744201 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 22:24:56.744350 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 22:24:56.763185 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 22:24:56.763350 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 22:24:56.782206 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 22:24:56.782376 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 22:24:56.800529 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 22:24:56.810897 systemd-networkd[936]: enp2s0f1np1: DHCPv6 lease lost
Jan 13 22:24:56.818283 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 22:24:56.823003 systemd-networkd[936]: enp2s0f0np0: DHCPv6 lease lost
Jan 13 22:24:56.836821 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 22:24:56.837085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 22:24:56.855929 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 22:24:56.856255 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 22:24:56.876242 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 22:24:56.876347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:24:56.919899 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 22:24:56.935901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 22:24:56.935944 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:24:56.955047 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 22:24:56.955131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:24:56.977147 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 22:24:56.977296 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:24:56.996132 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 22:24:56.996283 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:24:57.017329 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:24:57.038820 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 22:24:57.039158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:24:57.064385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 22:24:57.064433 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:24:57.068025 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 22:24:57.068056 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:24:57.094935 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 22:24:57.094970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 22:24:57.125064 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 22:24:57.125118 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 22:24:57.164867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 22:24:57.164945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:24:57.209881 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 22:24:57.245841 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 22:24:57.245895 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:24:57.477993 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Jan 13 22:24:57.264962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:24:57.265052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:24:57.288084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 22:24:57.288298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 22:24:57.328071 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 22:24:57.328329 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 22:24:57.348852 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 22:24:57.389268 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 22:24:57.412948 systemd[1]: Switching root.
Jan 13 22:24:57.560937 systemd-journald[267]: Journal stopped
Jan 13 22:25:00.151701 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 22:25:00.151717 kernel: SELinux: policy capability open_perms=1
Jan 13 22:25:00.151724 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 22:25:00.151731 kernel: SELinux: policy capability always_check_network=0
Jan 13 22:25:00.151736 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 22:25:00.151742 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 22:25:00.151748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 22:25:00.151753 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 22:25:00.151759 kernel: audit: type=1403 audit(1736807097.792:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 22:25:00.151769 systemd[1]: Successfully loaded SELinux policy in 165.116ms.
Jan 13 22:25:00.151778 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.932ms.
Jan 13 22:25:00.151785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 22:25:00.151791 systemd[1]: Detected architecture x86-64.
Jan 13 22:25:00.151797 systemd[1]: Detected first boot.
Jan 13 22:25:00.151804 systemd[1]: Hostname set to .
Jan 13 22:25:00.151812 systemd[1]: Initializing machine ID from random generator.
Jan 13 22:25:00.151819 zram_generator::config[1281]: No configuration found.
Jan 13 22:25:00.151826 systemd[1]: Populated /etc with preset unit settings.
Jan 13 22:25:00.151832 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 22:25:00.151839 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 22:25:00.151845 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 22:25:00.151853 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 22:25:00.151860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 22:25:00.151866 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 22:25:00.151873 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 22:25:00.151880 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 22:25:00.151888 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 22:25:00.151895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 22:25:00.151902 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 22:25:00.151909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:25:00.151916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:25:00.151923 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 22:25:00.151930 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 22:25:00.151937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 22:25:00.151944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 22:25:00.151950 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jan 13 22:25:00.151958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:25:00.151965 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 22:25:00.151971 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 22:25:00.151978 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:25:00.151986 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 22:25:00.151993 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:25:00.152000 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 22:25:00.152007 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 22:25:00.152015 systemd[1]: Reached target swap.target - Swaps.
Jan 13 22:25:00.152022 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 22:25:00.152029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 22:25:00.152035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:25:00.152042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:25:00.152049 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:25:00.152058 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 22:25:00.152065 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 22:25:00.152072 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 22:25:00.152079 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 22:25:00.152086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:25:00.152093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 22:25:00.152100 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 22:25:00.152108 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 22:25:00.152115 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 22:25:00.152122 systemd[1]: Reached target machines.target - Containers.
Jan 13 22:25:00.152129 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 22:25:00.152136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:25:00.152143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 22:25:00.152151 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 22:25:00.152157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:25:00.152166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 22:25:00.152173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 22:25:00.152180 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 22:25:00.152187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 22:25:00.152195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 22:25:00.152202 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 22:25:00.152209 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 22:25:00.152215 kernel: ACPI: bus type drm_connector registered
Jan 13 22:25:00.152222 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 22:25:00.152230 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 22:25:00.152237 kernel: fuse: init (API version 7.39)
Jan 13 22:25:00.152243 kernel: loop: module loaded
Jan 13 22:25:00.152249 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 22:25:00.152264 systemd-journald[1384]: Collecting audit messages is disabled.
Jan 13 22:25:00.152280 systemd-journald[1384]: Journal started
Jan 13 22:25:00.152294 systemd-journald[1384]: Runtime Journal (/run/log/journal/9fd1d5302c574700aa133193b2947c15) is 8.0M, max 636.6M, 628.6M free.
Jan 13 22:24:58.305675 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 22:24:58.331138 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 22:24:58.331428 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 22:25:00.180812 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 22:25:00.214830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 22:25:00.248819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 22:25:00.282841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 22:25:00.315665 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 22:25:00.315693 systemd[1]: Stopped verity-setup.service.
Jan 13 22:25:00.376811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:25:00.396804 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 22:25:00.408322 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 22:25:00.418029 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 22:25:00.428004 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 22:25:00.438001 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 22:25:00.447986 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 22:25:00.458005 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 22:25:00.468127 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 22:25:00.479198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:25:00.490321 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 22:25:00.490541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 22:25:00.502641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:25:00.503076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 22:25:00.514630 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 22:25:00.515066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 22:25:00.525624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 22:25:00.525982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 22:25:00.537622 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 22:25:00.537984 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 22:25:00.548680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 22:25:00.549048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 22:25:00.559635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:25:00.569593 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 22:25:00.581584 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 22:25:00.593602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:25:00.613576 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 22:25:00.632954 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 22:25:00.644693 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 22:25:00.653924 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 22:25:00.653959 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:25:00.665336 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 22:25:00.690018 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 22:25:00.701619 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 22:25:00.711006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:25:00.712062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 22:25:00.722361 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 22:25:00.732895 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 22:25:00.743263 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 22:25:00.746631 systemd-journald[1384]: Time spent on flushing to /var/log/journal/9fd1d5302c574700aa133193b2947c15 is 13.577ms for 1401 entries.
Jan 13 22:25:00.746631 systemd-journald[1384]: System Journal (/var/log/journal/9fd1d5302c574700aa133193b2947c15) is 8.0M, max 195.6M, 187.6M free.
Jan 13 22:25:00.792195 systemd-journald[1384]: Received client request to flush runtime journal.
Jan 13 22:25:00.760885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 22:25:00.761611 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 22:25:00.769631 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 22:25:00.779748 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 22:25:00.788252 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 22:25:00.813769 kernel: loop0: detected capacity change from 0 to 211296
Jan 13 22:25:00.814755 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 22:25:00.848809 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 22:25:00.849927 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 22:25:00.862151 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 22:25:00.873081 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 22:25:00.883998 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 22:25:00.894990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:25:00.904979 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 22:25:00.923832 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 22:25:00.926820 kernel: loop1: detected capacity change from 0 to 140768
Jan 13 22:25:00.949258 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 22:25:00.960552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 22:25:00.972437 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 22:25:00.972889 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 22:25:00.984338 udevadm[1420]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 22:25:00.991164 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Jan 13 22:25:00.991174 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Jan 13 22:25:00.993487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:25:01.010769 kernel: loop2: detected capacity change from 0 to 8
Jan 13 22:25:01.061814 kernel: loop3: detected capacity change from 0 to 142488
Jan 13 22:25:01.069852 ldconfig[1411]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 22:25:01.071808 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 22:25:01.134816 kernel: loop4: detected capacity change from 0 to 211296
Jan 13 22:25:01.170933 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 22:25:01.179826 kernel: loop5: detected capacity change from 0 to 140768
Jan 13 22:25:01.208812 kernel: loop6: detected capacity change from 0 to 8
Jan 13 22:25:01.227795 kernel: loop7: detected capacity change from 0 to 142488
Jan 13 22:25:01.227989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:25:01.238561 (sd-merge)[1441]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Jan 13 22:25:01.238794 (sd-merge)[1441]: Merged extensions into '/usr'.
Jan 13 22:25:01.239892 systemd-udevd[1443]: Using default interface naming scheme 'v255'.
Jan 13 22:25:01.241444 systemd[1]: Reloading requested from client PID 1416 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 22:25:01.241451 systemd[1]: Reloading...
Jan 13 22:25:01.278775 zram_generator::config[1468]: No configuration found.
Jan 13 22:25:01.302751 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Jan 13 22:25:01.302819 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1482)
Jan 13 22:25:01.302839 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 22:25:01.342776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 22:25:01.380773 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 22:25:01.396785 kernel: IPMI message handler: version 39.2
Jan 13 22:25:01.396863 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jan 13 22:25:01.471141 kernel: ACPI: button: Power Button [PWRF]
Jan 13 22:25:01.471183 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jan 13 22:25:01.471363 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Jan 13 22:25:01.420632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:25:01.475966 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Jan 13 22:25:01.476010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jan 13 22:25:01.477767 kernel: ipmi device interface
Jan 13 22:25:01.477786 kernel: iTCO_vendor_support: vendor-support=0
Jan 13 22:25:01.516772 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Jan 13 22:25:01.517004 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Jan 13 22:25:01.529634 systemd[1]: Reloading finished in 287 ms.
Jan 13 22:25:01.576654 kernel: ipmi_si: IPMI System Interface driver
Jan 13 22:25:01.576700 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Jan 13 22:25:01.621169 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Jan 13 22:25:01.621181 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Jan 13 22:25:01.621190 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Jan 13 22:25:01.690297 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jan 13 22:25:01.690378 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Jan 13 22:25:01.690451 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Jan 13 22:25:01.690464 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Jan 13 22:25:01.731768 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Jan 13 22:25:01.737406 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:25:01.760016 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 22:25:01.772768 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Jan 13 22:25:01.804021 kernel: intel_rapl_common: Found RAPL domain package
Jan 13 22:25:01.804065 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Jan 13 22:25:01.804158 kernel: intel_rapl_common: Found RAPL domain core
Jan 13 22:25:01.804171 kernel: intel_rapl_common: Found RAPL domain uncore
Jan 13 22:25:01.804179 kernel: intel_rapl_common: Found RAPL domain dram
Jan 13 22:25:01.836767 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Jan 13 22:25:01.895767 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jan 13 22:25:01.897052 systemd[1]: Starting ensure-sysext.service...
Jan 13 22:25:01.905380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 22:25:01.928213 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 22:25:01.939348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 22:25:01.939905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:25:01.940150 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 22:25:01.941925 systemd[1]: Reloading requested from client PID 1618 ('systemctl') (unit ensure-sysext.service)...
Jan 13 22:25:01.941931 systemd[1]: Reloading...
Jan 13 22:25:01.980771 zram_generator::config[1650]: No configuration found.
Jan 13 22:25:01.988968 systemd-tmpfiles[1623]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 22:25:01.989177 systemd-tmpfiles[1623]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 22:25:01.989668 systemd-tmpfiles[1623]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 22:25:01.989841 systemd-tmpfiles[1623]: ACLs are not supported, ignoring.
Jan 13 22:25:01.989878 systemd-tmpfiles[1623]: ACLs are not supported, ignoring.
Jan 13 22:25:01.991755 systemd-tmpfiles[1623]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 22:25:01.991760 systemd-tmpfiles[1623]: Skipping /boot
Jan 13 22:25:01.996005 systemd-tmpfiles[1623]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 22:25:01.996009 systemd-tmpfiles[1623]: Skipping /boot
Jan 13 22:25:02.034844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:25:02.087641 systemd[1]: Reloading finished in 145 ms.
Jan 13 22:25:02.112029 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 22:25:02.123967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:25:02.134926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:25:02.162050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 22:25:02.173673 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 22:25:02.181686 augenrules[1731]: No rules
Jan 13 22:25:02.185533 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 22:25:02.197614 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 22:25:02.204757 lvm[1736]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 22:25:02.210208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 22:25:02.220748 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 22:25:02.245562 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 22:25:02.255678 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 22:25:02.265017 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 22:25:02.277021 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 22:25:02.288083 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 22:25:02.299131 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 22:25:02.308912 systemd-networkd[1621]: lo: Link UP
Jan 13 22:25:02.308915 systemd-networkd[1621]: lo: Gained carrier
Jan 13 22:25:02.311281 systemd-networkd[1621]: bond0: netdev ready
Jan 13 22:25:02.312202 systemd-networkd[1621]: Enumeration completed
Jan 13 22:25:02.312343 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 22:25:02.313302 systemd-networkd[1621]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a3:fc.network.
Jan 13 22:25:02.323498 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:25:02.332949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:25:02.333102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:25:02.334511 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 22:25:02.346441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:25:02.348377 lvm[1753]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 22:25:02.356528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 22:25:02.363673 systemd-resolved[1738]: Positive Trust Anchors:
Jan 13 22:25:02.363679 systemd-resolved[1738]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 22:25:02.363705 systemd-resolved[1738]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 22:25:02.366764 systemd-resolved[1738]: Using system hostname 'ci-4081.3.0-a-8862dc3d2a'.
Jan 13 22:25:02.368545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 22:25:02.378002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:25:02.378727 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 22:25:02.390565 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 22:25:02.400925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 22:25:02.400999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:25:02.402088 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 22:25:02.413204 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 22:25:02.423344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:25:02.423469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 22:25:02.435435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 22:25:02.435597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 22:25:02.447988 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 22:25:02.448298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 22:25:02.458953 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 22:25:02.483851 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 13 22:25:02.514355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:25:02.514852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 22:25:02.516792 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jan 13 22:25:02.516757 systemd-networkd[1621]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a3:fd.network. Jan 13 22:25:02.533359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 22:25:02.545328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 22:25:02.556345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 22:25:02.565969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 22:25:02.566045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 13 22:25:02.566094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:25:02.566614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 22:25:02.566687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 22:25:02.578110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 22:25:02.578179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 22:25:02.589008 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 22:25:02.589079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 22:25:02.602397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:25:02.602540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 22:25:02.616055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 22:25:02.626920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 22:25:02.641725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 22:25:02.653594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 22:25:02.662998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 22:25:02.663138 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 13 22:25:02.663246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:25:02.664114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 22:25:02.664239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 22:25:02.675251 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 22:25:02.675395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 22:25:02.691303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 22:25:02.691487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 22:25:02.692847 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 13 22:25:02.711317 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 22:25:02.711500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 22:25:02.714541 systemd-networkd[1621]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 13 22:25:02.714771 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jan 13 22:25:02.716283 systemd-networkd[1621]: enp2s0f0np0: Link UP Jan 13 22:25:02.716600 systemd-networkd[1621]: enp2s0f0np0: Gained carrier Jan 13 22:25:02.733166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:25:02.736812 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 13 22:25:02.746088 systemd-networkd[1621]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a3:fc.network. Jan 13 22:25:02.746352 systemd-networkd[1621]: enp2s0f1np1: Link UP Jan 13 22:25:02.746634 systemd-networkd[1621]: enp2s0f1np1: Gained carrier Jan 13 22:25:02.748909 systemd[1]: Finished ensure-sysext.service. 
Jan 13 22:25:02.754975 systemd-networkd[1621]: bond0: Link UP Jan 13 22:25:02.755282 systemd-networkd[1621]: bond0: Gained carrier Jan 13 22:25:02.759723 systemd[1]: Reached target network.target - Network. Jan 13 22:25:02.767828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:25:02.778814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 22:25:02.778844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 22:25:02.794892 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 22:25:02.843765 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 13 22:25:02.851266 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 22:25:02.863768 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 13 22:25:02.863824 kernel: bond0: active interface up! Jan 13 22:25:02.887906 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 22:25:02.897872 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 22:25:02.908847 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 22:25:02.919844 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 22:25:02.930839 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 22:25:02.930855 systemd[1]: Reached target paths.target - Path Units. Jan 13 22:25:02.938838 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 22:25:02.947905 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 22:25:02.957898 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 22:25:02.968838 systemd[1]: Reached target timers.target - Timer Units. Jan 13 22:25:02.977041 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 22:25:02.987520 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 22:25:02.999669 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 22:25:03.009063 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 22:25:03.019849 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 22:25:03.029792 systemd[1]: Reached target basic.target - Basic System. Jan 13 22:25:03.038811 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 22:25:03.038823 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 22:25:03.046822 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 22:25:03.056415 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 22:25:03.067296 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 22:25:03.076401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 22:25:03.079685 coreos-metadata[1787]: Jan 13 22:25:03.079 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:25:03.085921 dbus-daemon[1788]: [system] SELinux support is enabled Jan 13 22:25:03.087437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 13 22:25:03.089206 jq[1791]: false Jan 13 22:25:03.096895 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 22:25:03.097481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 22:25:03.104611 extend-filesystems[1793]: Found loop4 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found loop5 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found loop6 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found loop7 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda1 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda2 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda3 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found usr Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda4 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda6 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda7 Jan 13 22:25:03.106968 extend-filesystems[1793]: Found sda9 Jan 13 22:25:03.106968 extend-filesystems[1793]: Checking size of /dev/sda9 Jan 13 22:25:03.265798 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jan 13 22:25:03.265816 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1501) Jan 13 22:25:03.265826 extend-filesystems[1793]: Resized partition /dev/sda9 Jan 13 22:25:03.107480 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 22:25:03.265973 extend-filesystems[1801]: resize2fs 1.47.1 (20-May-2024) Jan 13 22:25:03.165588 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 22:25:03.210143 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 22:25:03.251880 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 13 22:25:03.289672 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 13 22:25:03.297457 systemd-logind[1813]: Watching system buttons on /dev/input/event3 (Power Button) Jan 13 22:25:03.297468 systemd-logind[1813]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 22:25:03.297478 systemd-logind[1813]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 13 22:25:03.297649 systemd-logind[1813]: New seat seat0. Jan 13 22:25:03.298466 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 22:25:03.298836 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 22:25:03.308417 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 22:25:03.315591 update_engine[1818]: I20250113 22:25:03.315557 1818 main.cc:92] Flatcar Update Engine starting Jan 13 22:25:03.316391 update_engine[1818]: I20250113 22:25:03.316351 1818 update_check_scheduler.cc:74] Next update check in 2m54s Jan 13 22:25:03.319043 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 22:25:03.320660 jq[1819]: true Jan 13 22:25:03.330120 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 22:25:03.346952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 22:25:03.347075 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 22:25:03.347287 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 22:25:03.347421 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 22:25:03.357298 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 22:25:03.357382 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 22:25:03.370597 (ntainerd)[1823]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 22:25:03.372224 jq[1822]: true Jan 13 22:25:03.374040 dbus-daemon[1788]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 22:25:03.376951 tar[1821]: linux-amd64/helm Jan 13 22:25:03.382449 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 13 22:25:03.382546 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 13 22:25:03.384367 systemd[1]: Started update-engine.service - Update Engine. Jan 13 22:25:03.395430 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 22:25:03.395556 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 22:25:03.406934 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 22:25:03.407035 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 22:25:03.429369 bash[1850]: Updated "/home/core/.ssh/authorized_keys" Jan 13 22:25:03.434940 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 22:25:03.447103 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 22:25:03.459438 systemd[1]: Starting sshkeys.service... Jan 13 22:25:03.462008 locksmithd[1852]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 22:25:03.471111 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Jan 13 22:25:03.483525 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 22:25:03.505867 coreos-metadata[1859]: Jan 13 22:25:03.505 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:25:03.513764 sshd_keygen[1816]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 22:25:03.526617 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 22:25:03.548012 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 22:25:03.555459 containerd[1823]: time="2025-01-13T22:25:03.555413857Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 22:25:03.557114 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 22:25:03.557213 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 22:25:03.568103 containerd[1823]: time="2025-01-13T22:25:03.568053846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:25:03.568815 containerd[1823]: time="2025-01-13T22:25:03.568792916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:25:03.568815 containerd[1823]: time="2025-01-13T22:25:03.568809784Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 22:25:03.568858 containerd[1823]: time="2025-01-13T22:25:03.568822316Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 22:25:03.568960 containerd[1823]: time="2025-01-13T22:25:03.568906834Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 13 22:25:03.568960 containerd[1823]: time="2025-01-13T22:25:03.568919358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 22:25:03.568960 containerd[1823]: time="2025-01-13T22:25:03.568953452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:25:03.568960 containerd[1823]: time="2025-01-13T22:25:03.568961877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569096 containerd[1823]: time="2025-01-13T22:25:03.569055198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569096 containerd[1823]: time="2025-01-13T22:25:03.569065112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569096 containerd[1823]: time="2025-01-13T22:25:03.569072538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569096 containerd[1823]: time="2025-01-13T22:25:03.569077916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569166 containerd[1823]: time="2025-01-13T22:25:03.569120085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569273 containerd[1823]: time="2025-01-13T22:25:03.569234310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569304 containerd[1823]: time="2025-01-13T22:25:03.569294181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:25:03.569423 containerd[1823]: time="2025-01-13T22:25:03.569384150Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 22:25:03.569495 containerd[1823]: time="2025-01-13T22:25:03.569481201Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 22:25:03.569541 containerd[1823]: time="2025-01-13T22:25:03.569524884Z" level=info msg="metadata content store policy set" policy=shared Jan 13 22:25:03.580872 containerd[1823]: time="2025-01-13T22:25:03.580816416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 22:25:03.580872 containerd[1823]: time="2025-01-13T22:25:03.580851376Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 22:25:03.580872 containerd[1823]: time="2025-01-13T22:25:03.580862281Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 22:25:03.580872 containerd[1823]: time="2025-01-13T22:25:03.580871206Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 22:25:03.580964 containerd[1823]: time="2025-01-13T22:25:03.580879600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 22:25:03.580983 containerd[1823]: time="2025-01-13T22:25:03.580967379Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 22:25:03.581143 containerd[1823]: time="2025-01-13T22:25:03.581111838Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 22:25:03.581219 containerd[1823]: time="2025-01-13T22:25:03.581179807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 22:25:03.581219 containerd[1823]: time="2025-01-13T22:25:03.581190147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 22:25:03.581219 containerd[1823]: time="2025-01-13T22:25:03.581197886Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 22:25:03.581219 containerd[1823]: time="2025-01-13T22:25:03.581205581Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581219 containerd[1823]: time="2025-01-13T22:25:03.581212710Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581218986Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581243011Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581252212Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581259403Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581265930Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581277584Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581289831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581302 containerd[1823]: time="2025-01-13T22:25:03.581299444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581306779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581313816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581328008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581336013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581342417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581349331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581355948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581363601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581375959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581382572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581388915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581405 containerd[1823]: time="2025-01-13T22:25:03.581396823Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581407717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581414359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581420134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581449607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581459330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581465563Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581472024Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581477204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581492565Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581500878Z" level=info msg="NRI interface is disabled by configuration." Jan 13 22:25:03.581567 containerd[1823]: time="2025-01-13T22:25:03.581506402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 22:25:03.581712 containerd[1823]: time="2025-01-13T22:25:03.581675272Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 22:25:03.581789 containerd[1823]: time="2025-01-13T22:25:03.581716131Z" level=info msg="Connect containerd service" Jan 13 22:25:03.581789 containerd[1823]: time="2025-01-13T22:25:03.581733862Z" level=info msg="using legacy CRI server" Jan 13 22:25:03.581789 containerd[1823]: time="2025-01-13T22:25:03.581743617Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 22:25:03.581836 containerd[1823]: time="2025-01-13T22:25:03.581809781Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 22:25:03.582191 containerd[1823]: time="2025-01-13T22:25:03.582149152Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 22:25:03.582284 containerd[1823]: time="2025-01-13T22:25:03.582262068Z" level=info msg="Start subscribing containerd event" Jan 13 22:25:03.582317 containerd[1823]: time="2025-01-13T22:25:03.582294328Z" level=info msg="Start recovering state" Jan 13 22:25:03.582344 containerd[1823]: time="2025-01-13T22:25:03.582323743Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 13 22:25:03.582376 containerd[1823]: time="2025-01-13T22:25:03.582348056Z" level=info msg="Start event monitor" Jan 13 22:25:03.582376 containerd[1823]: time="2025-01-13T22:25:03.582349444Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 22:25:03.582376 containerd[1823]: time="2025-01-13T22:25:03.582359879Z" level=info msg="Start snapshots syncer" Jan 13 22:25:03.582376 containerd[1823]: time="2025-01-13T22:25:03.582372911Z" level=info msg="Start cni network conf syncer for default" Jan 13 22:25:03.582455 containerd[1823]: time="2025-01-13T22:25:03.582377603Z" level=info msg="Start streaming server" Jan 13 22:25:03.582455 containerd[1823]: time="2025-01-13T22:25:03.582409010Z" level=info msg="containerd successfully booted in 0.027441s" Jan 13 22:25:03.590026 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 22:25:03.600001 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 22:25:03.610200 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 22:25:03.634018 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 22:25:03.645417 tar[1821]: linux-amd64/LICENSE Jan 13 22:25:03.645464 tar[1821]: linux-amd64/README.md Jan 13 22:25:03.655012 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 13 22:25:03.669184 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 22:25:03.680766 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jan 13 22:25:03.706387 extend-filesystems[1801]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 22:25:03.706387 extend-filesystems[1801]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 13 22:25:03.706387 extend-filesystems[1801]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
Jan 13 22:25:03.728895 extend-filesystems[1793]: Resized filesystem in /dev/sda9 Jan 13 22:25:03.728895 extend-filesystems[1793]: Found sdb Jan 13 22:25:03.706898 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 22:25:03.706990 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 22:25:03.763995 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 22:25:04.348898 systemd-networkd[1621]: bond0: Gained IPv6LL Jan 13 22:25:04.350820 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 22:25:04.362318 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 22:25:04.382019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:04.392492 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 22:25:04.410337 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 22:25:05.018792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:05.036959 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 22:25:05.441578 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Jan 13 22:25:05.442013 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Jan 13 22:25:05.650440 kubelet[1922]: E0113 22:25:05.650303 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 22:25:05.661540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 22:25:05.661612 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 22:25:05.662652 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 22:25:05.677981 systemd[1]: Started sshd@0-147.75.202.79:22-139.178.89.65:43278.service - OpenSSH per-connection server daemon (139.178.89.65:43278). Jan 13 22:25:05.715917 sshd[1946]: Accepted publickey for core from 139.178.89.65 port 43278 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:05.717166 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:05.723117 systemd-logind[1813]: New session 1 of user core. Jan 13 22:25:05.724254 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 22:25:05.746020 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 22:25:05.758952 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 22:25:05.782031 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 22:25:05.798011 (systemd)[1950]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 22:25:05.868657 systemd[1950]: Queued start job for default target default.target. Jan 13 22:25:05.876470 systemd[1950]: Created slice app.slice - User Application Slice. Jan 13 22:25:05.876484 systemd[1950]: Reached target paths.target - Paths. Jan 13 22:25:05.876493 systemd[1950]: Reached target timers.target - Timers. Jan 13 22:25:05.877119 systemd[1950]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 22:25:05.882558 systemd[1950]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 22:25:05.882586 systemd[1950]: Reached target sockets.target - Sockets. Jan 13 22:25:05.882595 systemd[1950]: Reached target basic.target - Basic System. Jan 13 22:25:05.882615 systemd[1950]: Reached target default.target - Main User Target. Jan 13 22:25:05.882631 systemd[1950]: Startup finished in 81ms. 
Jan 13 22:25:05.882774 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 22:25:05.903025 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 22:25:05.974090 systemd[1]: Started sshd@1-147.75.202.79:22-139.178.89.65:43282.service - OpenSSH per-connection server daemon (139.178.89.65:43282). Jan 13 22:25:06.010684 sshd[1961]: Accepted publickey for core from 139.178.89.65 port 43282 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:06.011350 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:06.013948 systemd-logind[1813]: New session 2 of user core. Jan 13 22:25:06.030011 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 22:25:06.089163 sshd[1961]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:06.100480 systemd[1]: sshd@1-147.75.202.79:22-139.178.89.65:43282.service: Deactivated successfully. Jan 13 22:25:06.101807 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 22:25:06.102935 systemd-logind[1813]: Session 2 logged out. Waiting for processes to exit. Jan 13 22:25:06.104246 systemd[1]: Started sshd@2-147.75.202.79:22-139.178.89.65:43288.service - OpenSSH per-connection server daemon (139.178.89.65:43288). Jan 13 22:25:06.116479 systemd-logind[1813]: Removed session 2. Jan 13 22:25:06.144170 sshd[1968]: Accepted publickey for core from 139.178.89.65 port 43288 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:06.144980 sshd[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:06.148061 systemd-logind[1813]: New session 3 of user core. Jan 13 22:25:06.157062 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 22:25:06.231416 sshd[1968]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:06.238638 systemd[1]: sshd@2-147.75.202.79:22-139.178.89.65:43288.service: Deactivated successfully. 
Jan 13 22:25:06.242585 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 22:25:06.244563 systemd-logind[1813]: Session 3 logged out. Waiting for processes to exit. Jan 13 22:25:06.247114 systemd-logind[1813]: Removed session 3. Jan 13 22:25:08.219628 coreos-metadata[1787]: Jan 13 22:25:08.219 INFO Fetch successful Jan 13 22:25:08.273044 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 22:25:08.284425 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 13 22:25:08.773578 systemd-resolved[1738]: Clock change detected. Flushing caches. Jan 13 22:25:08.773762 systemd-timesyncd[1782]: Contacted time server 108.61.73.244:123 (0.flatcar.pool.ntp.org). Jan 13 22:25:08.773902 systemd-timesyncd[1782]: Initial clock synchronization to Mon 2025-01-13 22:25:08.773489 UTC. Jan 13 22:25:09.080199 login[1900]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 22:25:09.083418 systemd-logind[1813]: New session 4 of user core. Jan 13 22:25:09.084818 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 22:25:09.099382 login[1899]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 22:25:09.101996 systemd-logind[1813]: New session 5 of user core. Jan 13 22:25:09.103106 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 22:25:09.167731 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 13 22:25:09.193141 coreos-metadata[1859]: Jan 13 22:25:09.193 INFO Fetch successful Jan 13 22:25:09.262478 unknown[1859]: wrote ssh authorized keys file for user: core Jan 13 22:25:09.382802 update-ssh-keys[2005]: Updated "/home/core/.ssh/authorized_keys" Jan 13 22:25:09.383513 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 22:25:09.384426 systemd[1]: Finished sshkeys.service. 
Jan 13 22:25:09.384894 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 22:25:09.385109 systemd[1]: Startup finished in 1.910s (kernel) + 21.781s (initrd) + 11.323s (userspace) = 35.014s. Jan 13 22:25:16.347230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 22:25:16.357667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:16.582985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:16.588870 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 22:25:16.613599 kubelet[2017]: E0113 22:25:16.613455 2017 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 22:25:16.615871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 22:25:16.615969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 22:25:16.676854 systemd[1]: Started sshd@3-147.75.202.79:22-139.178.89.65:39542.service - OpenSSH per-connection server daemon (139.178.89.65:39542). Jan 13 22:25:16.711549 sshd[2036]: Accepted publickey for core from 139.178.89.65 port 39542 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:16.712358 sshd[2036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:16.715392 systemd-logind[1813]: New session 6 of user core. Jan 13 22:25:16.728766 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 22:25:16.792051 sshd[2036]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:16.809681 systemd[1]: sshd@3-147.75.202.79:22-139.178.89.65:39542.service: Deactivated successfully. Jan 13 22:25:16.813117 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 22:25:16.816372 systemd-logind[1813]: Session 6 logged out. Waiting for processes to exit. Jan 13 22:25:16.831197 systemd[1]: Started sshd@4-147.75.202.79:22-139.178.89.65:39544.service - OpenSSH per-connection server daemon (139.178.89.65:39544). Jan 13 22:25:16.833713 systemd-logind[1813]: Removed session 6. Jan 13 22:25:16.879574 sshd[2043]: Accepted publickey for core from 139.178.89.65 port 39544 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:16.880344 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:16.882960 systemd-logind[1813]: New session 7 of user core. Jan 13 22:25:16.900679 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 22:25:16.948254 sshd[2043]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:16.962705 systemd[1]: sshd@4-147.75.202.79:22-139.178.89.65:39544.service: Deactivated successfully. Jan 13 22:25:16.966102 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 22:25:16.969206 systemd-logind[1813]: Session 7 logged out. Waiting for processes to exit. Jan 13 22:25:16.987627 systemd[1]: Started sshd@5-147.75.202.79:22-139.178.89.65:39556.service - OpenSSH per-connection server daemon (139.178.89.65:39556). Jan 13 22:25:16.990180 systemd-logind[1813]: Removed session 7. Jan 13 22:25:17.038209 sshd[2050]: Accepted publickey for core from 139.178.89.65 port 39556 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:17.038824 sshd[2050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:17.041258 systemd-logind[1813]: New session 8 of user core. 
Jan 13 22:25:17.065738 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 22:25:17.122505 sshd[2050]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:17.145751 systemd[1]: sshd@5-147.75.202.79:22-139.178.89.65:39556.service: Deactivated successfully. Jan 13 22:25:17.149576 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 22:25:17.152856 systemd-logind[1813]: Session 8 logged out. Waiting for processes to exit. Jan 13 22:25:17.176551 systemd[1]: Started sshd@6-147.75.202.79:22-139.178.89.65:39570.service - OpenSSH per-connection server daemon (139.178.89.65:39570). Jan 13 22:25:17.179390 systemd-logind[1813]: Removed session 8. Jan 13 22:25:17.231169 sshd[2057]: Accepted publickey for core from 139.178.89.65 port 39570 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:17.233317 sshd[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:17.240806 systemd-logind[1813]: New session 9 of user core. Jan 13 22:25:17.257974 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 22:25:17.324132 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 22:25:17.324283 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:25:17.343504 sudo[2060]: pam_unix(sudo:session): session closed for user root Jan 13 22:25:17.344727 sshd[2057]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:17.365767 systemd[1]: sshd@6-147.75.202.79:22-139.178.89.65:39570.service: Deactivated successfully. Jan 13 22:25:17.369369 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 22:25:17.372774 systemd-logind[1813]: Session 9 logged out. Waiting for processes to exit. Jan 13 22:25:17.391134 systemd[1]: Started sshd@7-147.75.202.79:22-139.178.89.65:39578.service - OpenSSH per-connection server daemon (139.178.89.65:39578). 
Jan 13 22:25:17.393488 systemd-logind[1813]: Removed session 9. Jan 13 22:25:17.440654 sshd[2065]: Accepted publickey for core from 139.178.89.65 port 39578 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:17.441405 sshd[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:17.443912 systemd-logind[1813]: New session 10 of user core. Jan 13 22:25:17.460881 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 22:25:17.517416 sudo[2069]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 22:25:17.518185 sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:25:17.526013 sudo[2069]: pam_unix(sudo:session): session closed for user root Jan 13 22:25:17.538738 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 22:25:17.539480 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:25:17.571957 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 22:25:17.573216 auditctl[2072]: No rules Jan 13 22:25:17.573410 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 22:25:17.573521 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 22:25:17.574918 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 22:25:17.593141 augenrules[2090]: No rules Jan 13 22:25:17.593627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 22:25:17.594364 sudo[2068]: pam_unix(sudo:session): session closed for user root Jan 13 22:25:17.595753 sshd[2065]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:17.598641 systemd[1]: sshd@7-147.75.202.79:22-139.178.89.65:39578.service: Deactivated successfully. 
Jan 13 22:25:17.599805 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 22:25:17.600418 systemd-logind[1813]: Session 10 logged out. Waiting for processes to exit. Jan 13 22:25:17.602151 systemd[1]: Started sshd@8-147.75.202.79:22-139.178.89.65:39590.service - OpenSSH per-connection server daemon (139.178.89.65:39590). Jan 13 22:25:17.603025 systemd-logind[1813]: Removed session 10. Jan 13 22:25:17.639628 sshd[2098]: Accepted publickey for core from 139.178.89.65 port 39590 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:25:17.641466 sshd[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:25:17.648598 systemd-logind[1813]: New session 11 of user core. Jan 13 22:25:17.669937 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 22:25:17.729224 sudo[2101]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 22:25:17.729375 sudo[2101]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:25:18.015808 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 22:25:18.015864 (dockerd)[2129]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 22:25:18.272195 dockerd[2129]: time="2025-01-13T22:25:18.272132961Z" level=info msg="Starting up" Jan 13 22:25:18.349535 dockerd[2129]: time="2025-01-13T22:25:18.349497780Z" level=info msg="Loading containers: start." Jan 13 22:25:18.420453 kernel: Initializing XFRM netlink socket Jan 13 22:25:18.467021 systemd-networkd[1621]: docker0: Link UP Jan 13 22:25:18.482392 dockerd[2129]: time="2025-01-13T22:25:18.482373523Z" level=info msg="Loading containers: done." Jan 13 22:25:18.490391 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck427679072-merged.mount: Deactivated successfully. 
Jan 13 22:25:18.491229 dockerd[2129]: time="2025-01-13T22:25:18.491184117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 22:25:18.491268 dockerd[2129]: time="2025-01-13T22:25:18.491232677Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 22:25:18.491293 dockerd[2129]: time="2025-01-13T22:25:18.491284624Z" level=info msg="Daemon has completed initialization" Jan 13 22:25:18.506383 dockerd[2129]: time="2025-01-13T22:25:18.506325856Z" level=info msg="API listen on /run/docker.sock" Jan 13 22:25:18.506517 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 22:25:19.325589 containerd[1823]: time="2025-01-13T22:25:19.325538855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 22:25:19.889620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391086519.mount: Deactivated successfully. 
Jan 13 22:25:21.038452 containerd[1823]: time="2025-01-13T22:25:21.038393903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:21.038658 containerd[1823]: time="2025-01-13T22:25:21.038595417Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 22:25:21.038991 containerd[1823]: time="2025-01-13T22:25:21.038950906Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:21.040868 containerd[1823]: time="2025-01-13T22:25:21.040825776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:21.041348 containerd[1823]: time="2025-01-13T22:25:21.041303859Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.7157422s" Jan 13 22:25:21.041348 containerd[1823]: time="2025-01-13T22:25:21.041320557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 22:25:21.052241 containerd[1823]: time="2025-01-13T22:25:21.052192276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 22:25:22.546276 containerd[1823]: time="2025-01-13T22:25:22.546248733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:22.546520 containerd[1823]: time="2025-01-13T22:25:22.546389807Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 22:25:22.546920 containerd[1823]: time="2025-01-13T22:25:22.546879545Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:22.548449 containerd[1823]: time="2025-01-13T22:25:22.548405363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:22.549064 containerd[1823]: time="2025-01-13T22:25:22.549019867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.496804635s" Jan 13 22:25:22.549064 containerd[1823]: time="2025-01-13T22:25:22.549038864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 22:25:22.560885 containerd[1823]: time="2025-01-13T22:25:22.560818153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 22:25:23.554951 containerd[1823]: time="2025-01-13T22:25:23.554893109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:23.555158 containerd[1823]: 
time="2025-01-13T22:25:23.555028080Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 22:25:23.555494 containerd[1823]: time="2025-01-13T22:25:23.555458111Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:23.557106 containerd[1823]: time="2025-01-13T22:25:23.557064705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:23.557762 containerd[1823]: time="2025-01-13T22:25:23.557712196Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 996.871389ms" Jan 13 22:25:23.557762 containerd[1823]: time="2025-01-13T22:25:23.557735879Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 22:25:23.568873 containerd[1823]: time="2025-01-13T22:25:23.568824519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 22:25:24.407353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162606219.mount: Deactivated successfully. 
Jan 13 22:25:24.569802 containerd[1823]: time="2025-01-13T22:25:24.569745325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:24.570015 containerd[1823]: time="2025-01-13T22:25:24.569963534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 22:25:24.570309 containerd[1823]: time="2025-01-13T22:25:24.570266435Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:24.571206 containerd[1823]: time="2025-01-13T22:25:24.571159971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:24.571617 containerd[1823]: time="2025-01-13T22:25:24.571572450Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.002726117s" Jan 13 22:25:24.571617 containerd[1823]: time="2025-01-13T22:25:24.571587686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 22:25:24.582623 containerd[1823]: time="2025-01-13T22:25:24.582572856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 22:25:25.176387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90036781.mount: Deactivated successfully. 
Jan 13 22:25:25.737377 containerd[1823]: time="2025-01-13T22:25:25.737319277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:25.737590 containerd[1823]: time="2025-01-13T22:25:25.737518324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 22:25:25.737975 containerd[1823]: time="2025-01-13T22:25:25.737961961Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:25.739577 containerd[1823]: time="2025-01-13T22:25:25.739536090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:25.740169 containerd[1823]: time="2025-01-13T22:25:25.740126991Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.157531533s" Jan 13 22:25:25.740169 containerd[1823]: time="2025-01-13T22:25:25.740142198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 22:25:25.751264 containerd[1823]: time="2025-01-13T22:25:25.751246810Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 22:25:26.270853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303362170.mount: Deactivated successfully. 
Jan 13 22:25:26.272120 containerd[1823]: time="2025-01-13T22:25:26.272102250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:26.272344 containerd[1823]: time="2025-01-13T22:25:26.272329538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 22:25:26.272757 containerd[1823]: time="2025-01-13T22:25:26.272742963Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:26.274048 containerd[1823]: time="2025-01-13T22:25:26.274034806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:26.274569 containerd[1823]: time="2025-01-13T22:25:26.274554939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 523.288288ms" Jan 13 22:25:26.274621 containerd[1823]: time="2025-01-13T22:25:26.274571221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 22:25:26.286243 containerd[1823]: time="2025-01-13T22:25:26.286222954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 22:25:26.769174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 22:25:26.782631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 22:25:26.783768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436840928.mount: Deactivated successfully. Jan 13 22:25:26.984307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:26.986541 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 22:25:27.008926 kubelet[2529]: E0113 22:25:27.008836 2529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 22:25:27.010031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 22:25:27.010106 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 22:25:28.289934 containerd[1823]: time="2025-01-13T22:25:28.289878315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:28.290144 containerd[1823]: time="2025-01-13T22:25:28.290082958Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 22:25:28.290585 containerd[1823]: time="2025-01-13T22:25:28.290541800Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:28.292197 containerd[1823]: time="2025-01-13T22:25:28.292156505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:28.292888 containerd[1823]: time="2025-01-13T22:25:28.292844880Z" level=info msg="Pulled 
image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.006601171s" Jan 13 22:25:28.292888 containerd[1823]: time="2025-01-13T22:25:28.292862079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 22:25:30.004237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:30.015762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:30.028106 systemd[1]: Reloading requested from client PID 2723 ('systemctl') (unit session-11.scope)... Jan 13 22:25:30.028113 systemd[1]: Reloading... Jan 13 22:25:30.077513 zram_generator::config[2762]: No configuration found. Jan 13 22:25:30.144512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:25:30.203396 systemd[1]: Reloading finished in 175 ms. Jan 13 22:25:30.239115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:30.240064 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:30.241450 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 22:25:30.241592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:30.242389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:30.421766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 22:25:30.424068 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:25:30.446686 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:25:30.446686 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 22:25:30.446686 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:25:30.446952 kubelet[2832]: I0113 22:25:30.446739 2832 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:25:30.683188 kubelet[2832]: I0113 22:25:30.683125 2832 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 22:25:30.683188 kubelet[2832]: I0113 22:25:30.683137 2832 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:25:30.683289 kubelet[2832]: I0113 22:25:30.683246 2832 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 22:25:30.699340 kubelet[2832]: E0113 22:25:30.699297 2832 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.202.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.699788 kubelet[2832]: I0113 22:25:30.699750 2832 
dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:25:30.714011 kubelet[2832]: I0113 22:25:30.713971 2832 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 22:25:30.715109 kubelet[2832]: I0113 22:25:30.715072 2832 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:25:30.715221 kubelet[2832]: I0113 22:25:30.715175 2832 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 22:25:30.715221 kubelet[2832]: I0113 
22:25:30.715197 2832 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:25:30.715221 kubelet[2832]: I0113 22:25:30.715203 2832 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 22:25:30.715311 kubelet[2832]: I0113 22:25:30.715255 2832 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:25:30.715311 kubelet[2832]: I0113 22:25:30.715309 2832 kubelet.go:396] "Attempting to sync node with API server" Jan 13 22:25:30.715346 kubelet[2832]: I0113 22:25:30.715317 2832 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:25:30.715346 kubelet[2832]: I0113 22:25:30.715329 2832 kubelet.go:312] "Adding apiserver pod source" Jan 13 22:25:30.715346 kubelet[2832]: I0113 22:25:30.715337 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:25:30.716100 kubelet[2832]: I0113 22:25:30.716089 2832 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:25:30.716617 kubelet[2832]: W0113 22:25:30.716596 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.75.202.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.716681 kubelet[2832]: E0113 22:25:30.716623 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.202.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.718034 kubelet[2832]: W0113 22:25:30.717988 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.75.202.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8862dc3d2a&limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection 
refused Jan 13 22:25:30.718034 kubelet[2832]: E0113 22:25:30.718007 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.202.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8862dc3d2a&limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.718472 kubelet[2832]: I0113 22:25:30.718435 2832 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:25:30.719290 kubelet[2832]: W0113 22:25:30.719252 2832 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 22:25:30.719613 kubelet[2832]: I0113 22:25:30.719570 2832 server.go:1256] "Started kubelet" Jan 13 22:25:30.719645 kubelet[2832]: I0113 22:25:30.719635 2832 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 22:25:30.719713 kubelet[2832]: I0113 22:25:30.719707 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:25:30.719967 kubelet[2832]: I0113 22:25:30.719960 2832 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:25:30.722389 kubelet[2832]: I0113 22:25:30.722378 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:25:30.722760 kubelet[2832]: I0113 22:25:30.722726 2832 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 22:25:30.722872 kubelet[2832]: I0113 22:25:30.722864 2832 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 22:25:30.722961 kubelet[2832]: I0113 22:25:30.722954 2832 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 22:25:30.723208 kubelet[2832]: E0113 22:25:30.723192 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.75.202.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-8862dc3d2a?timeout=10s\": dial tcp 147.75.202.79:6443: connect: connection refused" interval="200ms" Jan 13 22:25:30.723272 kubelet[2832]: W0113 22:25:30.723188 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.75.202.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.723352 kubelet[2832]: E0113 22:25:30.723343 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.202.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.724405 kubelet[2832]: I0113 22:25:30.724372 2832 server.go:461] "Adding debug handlers to kubelet server" Jan 13 22:25:30.724509 kubelet[2832]: I0113 22:25:30.724497 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:25:30.724910 kubelet[2832]: I0113 22:25:30.724900 2832 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:25:30.724910 kubelet[2832]: I0113 22:25:30.724909 2832 factory.go:221] Registration of the systemd container factory successfully Jan 13 22:25:30.725051 kubelet[2832]: E0113 22:25:30.725042 2832 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 22:25:30.725512 kubelet[2832]: E0113 22:25:30.725501 2832 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.79:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.79:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-8862dc3d2a.181a60e225e79f31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-8862dc3d2a,UID:ci-4081.3.0-a-8862dc3d2a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-8862dc3d2a,},FirstTimestamp:2025-01-13 22:25:30.719559473 +0000 UTC m=+0.293345233,LastTimestamp:2025-01-13 22:25:30.719559473 +0000 UTC m=+0.293345233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-8862dc3d2a,}" Jan 13 22:25:30.731168 kubelet[2832]: I0113 22:25:30.731158 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:25:30.731697 kubelet[2832]: I0113 22:25:30.731688 2832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 22:25:30.731738 kubelet[2832]: I0113 22:25:30.731705 2832 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:25:30.731738 kubelet[2832]: I0113 22:25:30.731734 2832 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 22:25:30.731777 kubelet[2832]: E0113 22:25:30.731765 2832 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:25:30.731939 kubelet[2832]: W0113 22:25:30.731922 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.75.202.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.731979 kubelet[2832]: E0113 22:25:30.731941 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.202.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:30.832514 kubelet[2832]: E0113 22:25:30.832397 2832 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 22:25:30.867651 kubelet[2832]: I0113 22:25:30.867584 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:30.868328 kubelet[2832]: E0113 22:25:30.868272 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.79:6443/api/v1/nodes\": dial tcp 147.75.202.79:6443: connect: connection refused" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:30.868956 kubelet[2832]: I0113 22:25:30.868869 2832 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 22:25:30.868956 kubelet[2832]: I0113 22:25:30.868911 2832 cpu_manager.go:215] 
"Reconciling" reconcilePeriod="10s" Jan 13 22:25:30.868956 kubelet[2832]: I0113 22:25:30.868947 2832 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:25:30.870684 kubelet[2832]: I0113 22:25:30.870666 2832 policy_none.go:49] "None policy: Start" Jan 13 22:25:30.871269 kubelet[2832]: I0113 22:25:30.871225 2832 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:25:30.871269 kubelet[2832]: I0113 22:25:30.871248 2832 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:25:30.874095 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 22:25:30.891346 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 22:25:30.899505 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 22:25:30.900405 kubelet[2832]: I0113 22:25:30.900367 2832 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:25:30.900592 kubelet[2832]: I0113 22:25:30.900543 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:25:30.901282 kubelet[2832]: E0113 22:25:30.901245 2832 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:30.924014 kubelet[2832]: E0113 22:25:30.923982 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-8862dc3d2a?timeout=10s\": dial tcp 147.75.202.79:6443: connect: connection refused" interval="400ms" Jan 13 22:25:31.033397 kubelet[2832]: I0113 22:25:31.033162 2832 topology_manager.go:215] "Topology Admit Handler" podUID="05a43a4eba475d263d9fe0a0daf92b2c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.039453 
kubelet[2832]: I0113 22:25:31.039410 2832 topology_manager.go:215] "Topology Admit Handler" podUID="6e3a576e0539e452a4882d4f4a72090e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.040245 kubelet[2832]: I0113 22:25:31.040237 2832 topology_manager.go:215] "Topology Admit Handler" podUID="d2e9eaf46d0ffc6cfe8a090a07bde883" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.043558 systemd[1]: Created slice kubepods-burstable-pod05a43a4eba475d263d9fe0a0daf92b2c.slice - libcontainer container kubepods-burstable-pod05a43a4eba475d263d9fe0a0daf92b2c.slice. Jan 13 22:25:31.070788 kubelet[2832]: I0113 22:25:31.070738 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.071012 kubelet[2832]: E0113 22:25:31.070974 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.79:6443/api/v1/nodes\": dial tcp 147.75.202.79:6443: connect: connection refused" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.077566 systemd[1]: Created slice kubepods-burstable-pod6e3a576e0539e452a4882d4f4a72090e.slice - libcontainer container kubepods-burstable-pod6e3a576e0539e452a4882d4f4a72090e.slice. Jan 13 22:25:31.102645 systemd[1]: Created slice kubepods-burstable-podd2e9eaf46d0ffc6cfe8a090a07bde883.slice - libcontainer container kubepods-burstable-podd2e9eaf46d0ffc6cfe8a090a07bde883.slice. 
Jan 13 22:25:31.125018 kubelet[2832]: I0113 22:25:31.124913 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05a43a4eba475d263d9fe0a0daf92b2c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" (UID: \"05a43a4eba475d263d9fe0a0daf92b2c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125272 kubelet[2832]: I0113 22:25:31.125080 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125272 kubelet[2832]: I0113 22:25:31.125166 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125272 kubelet[2832]: I0113 22:25:31.125243 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2e9eaf46d0ffc6cfe8a090a07bde883-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-8862dc3d2a\" (UID: \"d2e9eaf46d0ffc6cfe8a090a07bde883\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125548 kubelet[2832]: I0113 22:25:31.125316 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/05a43a4eba475d263d9fe0a0daf92b2c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" (UID: \"05a43a4eba475d263d9fe0a0daf92b2c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125548 kubelet[2832]: I0113 22:25:31.125389 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125548 kubelet[2832]: I0113 22:25:31.125481 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125793 kubelet[2832]: I0113 22:25:31.125556 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.125793 kubelet[2832]: I0113 22:25:31.125644 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05a43a4eba475d263d9fe0a0daf92b2c-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" (UID: \"05a43a4eba475d263d9fe0a0daf92b2c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.325866 kubelet[2832]: E0113 22:25:31.325639 2832 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-8862dc3d2a?timeout=10s\": dial tcp 147.75.202.79:6443: connect: connection refused" interval="800ms" Jan 13 22:25:31.377765 containerd[1823]: time="2025-01-13T22:25:31.377627870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-8862dc3d2a,Uid:05a43a4eba475d263d9fe0a0daf92b2c,Namespace:kube-system,Attempt:0,}" Jan 13 22:25:31.399290 containerd[1823]: time="2025-01-13T22:25:31.399232006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-8862dc3d2a,Uid:6e3a576e0539e452a4882d4f4a72090e,Namespace:kube-system,Attempt:0,}" Jan 13 22:25:31.407723 containerd[1823]: time="2025-01-13T22:25:31.407673119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-8862dc3d2a,Uid:d2e9eaf46d0ffc6cfe8a090a07bde883,Namespace:kube-system,Attempt:0,}" Jan 13 22:25:31.473412 kubelet[2832]: I0113 22:25:31.473364 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.473665 kubelet[2832]: E0113 22:25:31.473610 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.79:6443/api/v1/nodes\": dial tcp 147.75.202.79:6443: connect: connection refused" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:31.531271 kubelet[2832]: W0113 22:25:31.531198 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.75.202.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8862dc3d2a&limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:31.531271 kubelet[2832]: E0113 22:25:31.531240 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: 
Get "https://147.75.202.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8862dc3d2a&limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:31.859647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645057390.mount: Deactivated successfully. Jan 13 22:25:31.860989 containerd[1823]: time="2025-01-13T22:25:31.860942824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:25:31.861280 containerd[1823]: time="2025-01-13T22:25:31.861227904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 22:25:31.861979 containerd[1823]: time="2025-01-13T22:25:31.861964415Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:25:31.862811 containerd[1823]: time="2025-01-13T22:25:31.862779557Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:25:31.862972 containerd[1823]: time="2025-01-13T22:25:31.862918499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:25:31.863595 containerd[1823]: time="2025-01-13T22:25:31.863577716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:25:31.863631 containerd[1823]: time="2025-01-13T22:25:31.863601087Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:25:31.865253 containerd[1823]: 
time="2025-01-13T22:25:31.865238302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.457642ms" Jan 13 22:25:31.865913 containerd[1823]: time="2025-01-13T22:25:31.865868839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:25:31.866389 containerd[1823]: time="2025-01-13T22:25:31.866354038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.65396ms" Jan 13 22:25:31.867932 containerd[1823]: time="2025-01-13T22:25:31.867882587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 468.604024ms" Jan 13 22:25:31.872654 kubelet[2832]: W0113 22:25:31.872593 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.75.202.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:31.872654 kubelet[2832]: E0113 22:25:31.872632 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to 
list *v1.Service: Get "https://147.75.202.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.79:6443: connect: connection refused Jan 13 22:25:31.965205 containerd[1823]: time="2025-01-13T22:25:31.965154015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:31.965273 containerd[1823]: time="2025-01-13T22:25:31.965216199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:31.965396 containerd[1823]: time="2025-01-13T22:25:31.965201028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:31.965464 containerd[1823]: time="2025-01-13T22:25:31.965396594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:31.965464 containerd[1823]: time="2025-01-13T22:25:31.965185168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:31.965464 containerd[1823]: time="2025-01-13T22:25:31.965413925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:31.965464 containerd[1823]: time="2025-01-13T22:25:31.965423804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:31.965546 containerd[1823]: time="2025-01-13T22:25:31.965452111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:31.965546 containerd[1823]: time="2025-01-13T22:25:31.965442958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:31.965546 containerd[1823]: time="2025-01-13T22:25:31.965465518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:31.965546 containerd[1823]: time="2025-01-13T22:25:31.965469901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:31.965546 containerd[1823]: time="2025-01-13T22:25:31.965506289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:31.992732 systemd[1]: Started cri-containerd-696a6a1cd4d2d4aa051cb73b78818d246d50451b436063c02ca330230c6279ac.scope - libcontainer container 696a6a1cd4d2d4aa051cb73b78818d246d50451b436063c02ca330230c6279ac. Jan 13 22:25:31.993466 systemd[1]: Started cri-containerd-988ae5ce04a4daab97faa57302f926a425acb6bcbe0eab7e9119831ba1b54a0a.scope - libcontainer container 988ae5ce04a4daab97faa57302f926a425acb6bcbe0eab7e9119831ba1b54a0a. Jan 13 22:25:31.994183 systemd[1]: Started cri-containerd-c6da85dd5f934b5b525731404e6e7f7319f071e8f89a361de647611c6d684db4.scope - libcontainer container c6da85dd5f934b5b525731404e6e7f7319f071e8f89a361de647611c6d684db4. 
Jan 13 22:25:32.019151 containerd[1823]: time="2025-01-13T22:25:32.019116966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-8862dc3d2a,Uid:d2e9eaf46d0ffc6cfe8a090a07bde883,Namespace:kube-system,Attempt:0,} returns sandbox id \"988ae5ce04a4daab97faa57302f926a425acb6bcbe0eab7e9119831ba1b54a0a\"" Jan 13 22:25:32.019548 containerd[1823]: time="2025-01-13T22:25:32.019532034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-8862dc3d2a,Uid:05a43a4eba475d263d9fe0a0daf92b2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"696a6a1cd4d2d4aa051cb73b78818d246d50451b436063c02ca330230c6279ac\"" Jan 13 22:25:32.020327 containerd[1823]: time="2025-01-13T22:25:32.020310637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-8862dc3d2a,Uid:6e3a576e0539e452a4882d4f4a72090e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6da85dd5f934b5b525731404e6e7f7319f071e8f89a361de647611c6d684db4\"" Jan 13 22:25:32.021379 containerd[1823]: time="2025-01-13T22:25:32.021359607Z" level=info msg="CreateContainer within sandbox \"988ae5ce04a4daab97faa57302f926a425acb6bcbe0eab7e9119831ba1b54a0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 22:25:32.021429 containerd[1823]: time="2025-01-13T22:25:32.021366882Z" level=info msg="CreateContainer within sandbox \"696a6a1cd4d2d4aa051cb73b78818d246d50451b436063c02ca330230c6279ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 22:25:32.021531 containerd[1823]: time="2025-01-13T22:25:32.021515115Z" level=info msg="CreateContainer within sandbox \"c6da85dd5f934b5b525731404e6e7f7319f071e8f89a361de647611c6d684db4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 22:25:32.028931 containerd[1823]: time="2025-01-13T22:25:32.028878588Z" level=info msg="CreateContainer within sandbox 
\"696a6a1cd4d2d4aa051cb73b78818d246d50451b436063c02ca330230c6279ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"619261b4308eca79803006c3fc81999820358b1f7413a01fc4de62eff789711d\"" Jan 13 22:25:32.029172 containerd[1823]: time="2025-01-13T22:25:32.029143238Z" level=info msg="StartContainer for \"619261b4308eca79803006c3fc81999820358b1f7413a01fc4de62eff789711d\"" Jan 13 22:25:32.030267 containerd[1823]: time="2025-01-13T22:25:32.030216582Z" level=info msg="CreateContainer within sandbox \"c6da85dd5f934b5b525731404e6e7f7319f071e8f89a361de647611c6d684db4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff93cf2ad127166ef1e29e5615f3cdea102a3a1497caad5ddb719cb3ef9d9132\"" Jan 13 22:25:32.030381 containerd[1823]: time="2025-01-13T22:25:32.030368554Z" level=info msg="CreateContainer within sandbox \"988ae5ce04a4daab97faa57302f926a425acb6bcbe0eab7e9119831ba1b54a0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df6355c1b53ad26073b26d7f9aa35b1e7bccc533bcdc9c62255a559b0585a223\"" Jan 13 22:25:32.030407 containerd[1823]: time="2025-01-13T22:25:32.030375939Z" level=info msg="StartContainer for \"ff93cf2ad127166ef1e29e5615f3cdea102a3a1497caad5ddb719cb3ef9d9132\"" Jan 13 22:25:32.030572 containerd[1823]: time="2025-01-13T22:25:32.030524022Z" level=info msg="StartContainer for \"df6355c1b53ad26073b26d7f9aa35b1e7bccc533bcdc9c62255a559b0585a223\"" Jan 13 22:25:32.054667 systemd[1]: Started cri-containerd-619261b4308eca79803006c3fc81999820358b1f7413a01fc4de62eff789711d.scope - libcontainer container 619261b4308eca79803006c3fc81999820358b1f7413a01fc4de62eff789711d. Jan 13 22:25:32.055313 systemd[1]: Started cri-containerd-df6355c1b53ad26073b26d7f9aa35b1e7bccc533bcdc9c62255a559b0585a223.scope - libcontainer container df6355c1b53ad26073b26d7f9aa35b1e7bccc533bcdc9c62255a559b0585a223. 
Jan 13 22:25:32.055912 systemd[1]: Started cri-containerd-ff93cf2ad127166ef1e29e5615f3cdea102a3a1497caad5ddb719cb3ef9d9132.scope - libcontainer container ff93cf2ad127166ef1e29e5615f3cdea102a3a1497caad5ddb719cb3ef9d9132. Jan 13 22:25:32.080067 containerd[1823]: time="2025-01-13T22:25:32.080040488Z" level=info msg="StartContainer for \"619261b4308eca79803006c3fc81999820358b1f7413a01fc4de62eff789711d\" returns successfully" Jan 13 22:25:32.080067 containerd[1823]: time="2025-01-13T22:25:32.080065375Z" level=info msg="StartContainer for \"df6355c1b53ad26073b26d7f9aa35b1e7bccc533bcdc9c62255a559b0585a223\" returns successfully" Jan 13 22:25:32.081033 containerd[1823]: time="2025-01-13T22:25:32.080949035Z" level=info msg="StartContainer for \"ff93cf2ad127166ef1e29e5615f3cdea102a3a1497caad5ddb719cb3ef9d9132\" returns successfully" Jan 13 22:25:32.275036 kubelet[2832]: I0113 22:25:32.274975 2832 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:32.706914 kubelet[2832]: E0113 22:25:32.706865 2832 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-8862dc3d2a\" not found" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:32.818817 kubelet[2832]: I0113 22:25:32.818734 2832 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:32.832747 kubelet[2832]: E0113 22:25:32.832714 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:32.933215 kubelet[2832]: E0113 22:25:32.933111 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.034478 kubelet[2832]: E0113 22:25:33.034235 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.135479 kubelet[2832]: E0113 
22:25:33.135407 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.236151 kubelet[2832]: E0113 22:25:33.236028 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.336692 kubelet[2832]: E0113 22:25:33.336590 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.437826 kubelet[2832]: E0113 22:25:33.437728 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.539087 kubelet[2832]: E0113 22:25:33.538964 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.639321 kubelet[2832]: E0113 22:25:33.639216 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.740187 kubelet[2832]: E0113 22:25:33.740135 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.841332 kubelet[2832]: E0113 22:25:33.841265 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:33.942333 kubelet[2832]: E0113 22:25:33.942119 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:34.042728 kubelet[2832]: E0113 22:25:34.042641 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:34.142977 kubelet[2832]: E0113 22:25:34.142892 2832 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"ci-4081.3.0-a-8862dc3d2a\" not found" Jan 13 22:25:34.717579 kubelet[2832]: I0113 22:25:34.717513 2832 apiserver.go:52] "Watching apiserver" Jan 13 22:25:34.724134 kubelet[2832]: I0113 22:25:34.724090 2832 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 22:25:35.595396 systemd[1]: Reloading requested from client PID 3149 ('systemctl') (unit session-11.scope)... Jan 13 22:25:35.595403 systemd[1]: Reloading... Jan 13 22:25:35.634511 zram_generator::config[3188]: No configuration found. Jan 13 22:25:35.710569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:25:35.777408 systemd[1]: Reloading finished in 181 ms. Jan 13 22:25:35.802396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:35.810773 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 22:25:35.810889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:35.825545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:25:36.038970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:25:36.043254 (kubelet)[3252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:25:36.066594 kubelet[3252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:25:36.066594 kubelet[3252]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 22:25:36.066594 kubelet[3252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:25:36.066897 kubelet[3252]: I0113 22:25:36.066621 3252 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:25:36.068870 kubelet[3252]: I0113 22:25:36.068829 3252 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 22:25:36.068870 kubelet[3252]: I0113 22:25:36.068840 3252 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:25:36.068970 kubelet[3252]: I0113 22:25:36.068932 3252 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 22:25:36.069783 kubelet[3252]: I0113 22:25:36.069747 3252 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 22:25:36.070842 kubelet[3252]: I0113 22:25:36.070832 3252 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:25:36.079376 kubelet[3252]: I0113 22:25:36.079359 3252 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 22:25:36.079490 kubelet[3252]: I0113 22:25:36.079483 3252 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:25:36.079635 kubelet[3252]: I0113 22:25:36.079600 3252 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 22:25:36.079635 kubelet[3252]: I0113 22:25:36.079614 3252 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:25:36.079635 kubelet[3252]: I0113 22:25:36.079620 3252 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 22:25:36.079635 kubelet[3252]: I0113 
22:25:36.079638 3252 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:25:36.079750 kubelet[3252]: I0113 22:25:36.079687 3252 kubelet.go:396] "Attempting to sync node with API server" Jan 13 22:25:36.079750 kubelet[3252]: I0113 22:25:36.079695 3252 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:25:36.079750 kubelet[3252]: I0113 22:25:36.079709 3252 kubelet.go:312] "Adding apiserver pod source" Jan 13 22:25:36.079750 kubelet[3252]: I0113 22:25:36.079717 3252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:25:36.080024 kubelet[3252]: I0113 22:25:36.080005 3252 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:25:36.080139 kubelet[3252]: I0113 22:25:36.080132 3252 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:25:36.080443 kubelet[3252]: I0113 22:25:36.080434 3252 server.go:1256] "Started kubelet" Jan 13 22:25:36.080517 kubelet[3252]: I0113 22:25:36.080502 3252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:25:36.080517 kubelet[3252]: I0113 22:25:36.080505 3252 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 22:25:36.080919 kubelet[3252]: I0113 22:25:36.080908 3252 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:25:36.082124 kubelet[3252]: I0113 22:25:36.082030 3252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:25:36.082124 kubelet[3252]: I0113 22:25:36.082080 3252 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 22:25:36.082124 kubelet[3252]: I0113 22:25:36.082117 3252 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 22:25:36.082255 kubelet[3252]: I0113 22:25:36.082226 3252 reconciler_new.go:29] 
"Reconciler: start to sync state" Jan 13 22:25:36.082326 kubelet[3252]: I0113 22:25:36.082315 3252 server.go:461] "Adding debug handlers to kubelet server" Jan 13 22:25:36.082556 kubelet[3252]: I0113 22:25:36.082519 3252 factory.go:221] Registration of the systemd container factory successfully Jan 13 22:25:36.082596 kubelet[3252]: I0113 22:25:36.082584 3252 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:25:36.082626 kubelet[3252]: E0113 22:25:36.082591 3252 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 22:25:36.083116 kubelet[3252]: I0113 22:25:36.083106 3252 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:25:36.087693 kubelet[3252]: I0113 22:25:36.087675 3252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:25:36.088220 kubelet[3252]: I0113 22:25:36.088213 3252 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 22:25:36.088255 kubelet[3252]: I0113 22:25:36.088227 3252 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:25:36.088255 kubelet[3252]: I0113 22:25:36.088237 3252 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 22:25:36.088285 kubelet[3252]: E0113 22:25:36.088261 3252 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:25:36.099460 kubelet[3252]: I0113 22:25:36.099413 3252 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 22:25:36.099460 kubelet[3252]: I0113 22:25:36.099425 3252 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 22:25:36.099460 kubelet[3252]: I0113 22:25:36.099433 3252 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:25:36.099568 kubelet[3252]: I0113 22:25:36.099523 3252 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 22:25:36.099568 kubelet[3252]: I0113 22:25:36.099536 3252 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 22:25:36.099568 kubelet[3252]: I0113 22:25:36.099540 3252 policy_none.go:49] "None policy: Start" Jan 13 22:25:36.099802 kubelet[3252]: I0113 22:25:36.099765 3252 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:25:36.099802 kubelet[3252]: I0113 22:25:36.099781 3252 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:25:36.099929 kubelet[3252]: I0113 22:25:36.099889 3252 state_mem.go:75] "Updated machine memory state" Jan 13 22:25:36.101909 kubelet[3252]: I0113 22:25:36.101872 3252 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:25:36.102036 kubelet[3252]: I0113 22:25:36.101999 3252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:25:36.184618 kubelet[3252]: I0113 22:25:36.184602 3252 kubelet_node_status.go:73] "Attempting to register 
node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.188347 kubelet[3252]: I0113 22:25:36.188308 3252 topology_manager.go:215] "Topology Admit Handler" podUID="6e3a576e0539e452a4882d4f4a72090e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.188347 kubelet[3252]: I0113 22:25:36.188348 3252 topology_manager.go:215] "Topology Admit Handler" podUID="d2e9eaf46d0ffc6cfe8a090a07bde883" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.188416 kubelet[3252]: I0113 22:25:36.188367 3252 topology_manager.go:215] "Topology Admit Handler" podUID="05a43a4eba475d263d9fe0a0daf92b2c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.188814 kubelet[3252]: I0113 22:25:36.188777 3252 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.188814 kubelet[3252]: I0113 22:25:36.188812 3252 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.191488 kubelet[3252]: W0113 22:25:36.191464 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:25:36.191915 kubelet[3252]: W0113 22:25:36.191875 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:25:36.191973 kubelet[3252]: W0113 22:25:36.191946 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:25:36.383287 kubelet[3252]: I0113 22:25:36.383243 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383287 kubelet[3252]: I0113 22:25:36.383272 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383407 kubelet[3252]: I0113 22:25:36.383291 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2e9eaf46d0ffc6cfe8a090a07bde883-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-8862dc3d2a\" (UID: \"d2e9eaf46d0ffc6cfe8a090a07bde883\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383407 kubelet[3252]: I0113 22:25:36.383303 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05a43a4eba475d263d9fe0a0daf92b2c-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" (UID: \"05a43a4eba475d263d9fe0a0daf92b2c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383407 kubelet[3252]: I0113 22:25:36.383334 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05a43a4eba475d263d9fe0a0daf92b2c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" (UID: \"05a43a4eba475d263d9fe0a0daf92b2c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383407 kubelet[3252]: I0113 
22:25:36.383382 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05a43a4eba475d263d9fe0a0daf92b2c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" (UID: \"05a43a4eba475d263d9fe0a0daf92b2c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383407 kubelet[3252]: I0113 22:25:36.383404 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383529 kubelet[3252]: I0113 22:25:36.383417 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:36.383529 kubelet[3252]: I0113 22:25:36.383433 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e3a576e0539e452a4882d4f4a72090e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" (UID: \"6e3a576e0539e452a4882d4f4a72090e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:37.080235 kubelet[3252]: I0113 22:25:37.080180 3252 apiserver.go:52] "Watching apiserver" Jan 13 22:25:37.082575 kubelet[3252]: I0113 22:25:37.082557 3252 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 22:25:37.094226 kubelet[3252]: W0113 
22:25:37.094208 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:25:37.094317 kubelet[3252]: E0113 22:25:37.094252 3252 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-8862dc3d2a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:37.094434 kubelet[3252]: W0113 22:25:37.094428 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:25:37.094434 kubelet[3252]: W0113 22:25:37.094431 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:25:37.094490 kubelet[3252]: E0113 22:25:37.094463 3252 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-8862dc3d2a\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:37.094490 kubelet[3252]: E0113 22:25:37.094467 3252 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-8862dc3d2a\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-8862dc3d2a" Jan 13 22:25:37.105101 kubelet[3252]: I0113 22:25:37.105054 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-8862dc3d2a" podStartSLOduration=1.105027501 podStartE2EDuration="1.105027501s" podCreationTimestamp="2025-01-13 22:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:25:37.101135231 +0000 UTC m=+1.055609715" watchObservedRunningTime="2025-01-13 22:25:37.105027501 +0000 UTC m=+1.059502156" Jan 13 22:25:37.108772 kubelet[3252]: 
I0113 22:25:37.108759 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8862dc3d2a" podStartSLOduration=1.108738433 podStartE2EDuration="1.108738433s" podCreationTimestamp="2025-01-13 22:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:25:37.105102218 +0000 UTC m=+1.059576701" watchObservedRunningTime="2025-01-13 22:25:37.108738433 +0000 UTC m=+1.063212914" Jan 13 22:25:37.113618 kubelet[3252]: I0113 22:25:37.113596 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-8862dc3d2a" podStartSLOduration=1.113566053 podStartE2EDuration="1.113566053s" podCreationTimestamp="2025-01-13 22:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:25:37.108821167 +0000 UTC m=+1.063295650" watchObservedRunningTime="2025-01-13 22:25:37.113566053 +0000 UTC m=+1.068040542" Jan 13 22:25:39.762746 sudo[2101]: pam_unix(sudo:session): session closed for user root Jan 13 22:25:39.763599 sshd[2098]: pam_unix(sshd:session): session closed for user core Jan 13 22:25:39.765021 systemd[1]: sshd@8-147.75.202.79:22-139.178.89.65:39590.service: Deactivated successfully. Jan 13 22:25:39.765961 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 22:25:39.766042 systemd[1]: session-11.scope: Consumed 2.964s CPU time, 201.2M memory peak, 0B memory swap peak. Jan 13 22:25:39.766737 systemd-logind[1813]: Session 11 logged out. Waiting for processes to exit. Jan 13 22:25:39.767203 systemd-logind[1813]: Removed session 11. Jan 13 22:25:49.381091 update_engine[1818]: I20250113 22:25:49.380947 1818 update_attempter.cc:509] Updating boot flags... 
Jan 13 22:25:49.422456 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3422) Jan 13 22:25:49.452468 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3423) Jan 13 22:25:49.482463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3423) Jan 13 22:25:49.561762 kubelet[3252]: I0113 22:25:49.561673 3252 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 22:25:49.562689 containerd[1823]: time="2025-01-13T22:25:49.562410629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 22:25:49.563305 kubelet[3252]: I0113 22:25:49.562872 3252 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 22:25:50.473772 kubelet[3252]: I0113 22:25:50.473706 3252 topology_manager.go:215] "Topology Admit Handler" podUID="c7d52a9b-633a-4ead-8c0d-af3a9daaeaff" podNamespace="kube-system" podName="kube-proxy-xchbj" Jan 13 22:25:50.488355 systemd[1]: Created slice kubepods-besteffort-podc7d52a9b_633a_4ead_8c0d_af3a9daaeaff.slice - libcontainer container kubepods-besteffort-podc7d52a9b_633a_4ead_8c0d_af3a9daaeaff.slice. 
Jan 13 22:25:50.579357 kubelet[3252]: I0113 22:25:50.579255 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d52a9b-633a-4ead-8c0d-af3a9daaeaff-xtables-lock\") pod \"kube-proxy-xchbj\" (UID: \"c7d52a9b-633a-4ead-8c0d-af3a9daaeaff\") " pod="kube-system/kube-proxy-xchbj" Jan 13 22:25:50.580154 kubelet[3252]: I0113 22:25:50.579426 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d52a9b-633a-4ead-8c0d-af3a9daaeaff-lib-modules\") pod \"kube-proxy-xchbj\" (UID: \"c7d52a9b-633a-4ead-8c0d-af3a9daaeaff\") " pod="kube-system/kube-proxy-xchbj" Jan 13 22:25:50.580154 kubelet[3252]: I0113 22:25:50.579596 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29rl\" (UniqueName: \"kubernetes.io/projected/c7d52a9b-633a-4ead-8c0d-af3a9daaeaff-kube-api-access-v29rl\") pod \"kube-proxy-xchbj\" (UID: \"c7d52a9b-633a-4ead-8c0d-af3a9daaeaff\") " pod="kube-system/kube-proxy-xchbj" Jan 13 22:25:50.580154 kubelet[3252]: I0113 22:25:50.579666 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7d52a9b-633a-4ead-8c0d-af3a9daaeaff-kube-proxy\") pod \"kube-proxy-xchbj\" (UID: \"c7d52a9b-633a-4ead-8c0d-af3a9daaeaff\") " pod="kube-system/kube-proxy-xchbj" Jan 13 22:25:50.696084 kubelet[3252]: I0113 22:25:50.696014 3252 topology_manager.go:215] "Topology Admit Handler" podUID="ecec6627-5497-4b63-a284-e7f9ae35d314" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-5k6gj" Jan 13 22:25:50.710932 systemd[1]: Created slice kubepods-besteffort-podecec6627_5497_4b63_a284_e7f9ae35d314.slice - libcontainer container kubepods-besteffort-podecec6627_5497_4b63_a284_e7f9ae35d314.slice. 
Jan 13 22:25:50.781486 kubelet[3252]: I0113 22:25:50.781249 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnkrq\" (UniqueName: \"kubernetes.io/projected/ecec6627-5497-4b63-a284-e7f9ae35d314-kube-api-access-tnkrq\") pod \"tigera-operator-c7ccbd65-5k6gj\" (UID: \"ecec6627-5497-4b63-a284-e7f9ae35d314\") " pod="tigera-operator/tigera-operator-c7ccbd65-5k6gj" Jan 13 22:25:50.781486 kubelet[3252]: I0113 22:25:50.781354 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ecec6627-5497-4b63-a284-e7f9ae35d314-var-lib-calico\") pod \"tigera-operator-c7ccbd65-5k6gj\" (UID: \"ecec6627-5497-4b63-a284-e7f9ae35d314\") " pod="tigera-operator/tigera-operator-c7ccbd65-5k6gj" Jan 13 22:25:50.810920 containerd[1823]: time="2025-01-13T22:25:50.810802159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xchbj,Uid:c7d52a9b-633a-4ead-8c0d-af3a9daaeaff,Namespace:kube-system,Attempt:0,}" Jan 13 22:25:50.821110 containerd[1823]: time="2025-01-13T22:25:50.821074640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:50.821110 containerd[1823]: time="2025-01-13T22:25:50.821098185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:50.821110 containerd[1823]: time="2025-01-13T22:25:50.821104760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:50.821222 containerd[1823]: time="2025-01-13T22:25:50.821139209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:50.844728 systemd[1]: Started cri-containerd-c624249588337d4ff3adca9be1c39adeff5e822c87dccbe858ca86aca9b365e7.scope - libcontainer container c624249588337d4ff3adca9be1c39adeff5e822c87dccbe858ca86aca9b365e7. Jan 13 22:25:50.856667 containerd[1823]: time="2025-01-13T22:25:50.856639090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xchbj,Uid:c7d52a9b-633a-4ead-8c0d-af3a9daaeaff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c624249588337d4ff3adca9be1c39adeff5e822c87dccbe858ca86aca9b365e7\"" Jan 13 22:25:50.858498 containerd[1823]: time="2025-01-13T22:25:50.858480018Z" level=info msg="CreateContainer within sandbox \"c624249588337d4ff3adca9be1c39adeff5e822c87dccbe858ca86aca9b365e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 22:25:50.864615 containerd[1823]: time="2025-01-13T22:25:50.864570577Z" level=info msg="CreateContainer within sandbox \"c624249588337d4ff3adca9be1c39adeff5e822c87dccbe858ca86aca9b365e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3e0364310b6d877ef0a566ff83b13b98aa17608f1ad8dc796005c8aaec763cc\"" Jan 13 22:25:50.864849 containerd[1823]: time="2025-01-13T22:25:50.864835530Z" level=info msg="StartContainer for \"a3e0364310b6d877ef0a566ff83b13b98aa17608f1ad8dc796005c8aaec763cc\"" Jan 13 22:25:50.895717 systemd[1]: Started cri-containerd-a3e0364310b6d877ef0a566ff83b13b98aa17608f1ad8dc796005c8aaec763cc.scope - libcontainer container a3e0364310b6d877ef0a566ff83b13b98aa17608f1ad8dc796005c8aaec763cc. 
Jan 13 22:25:50.914235 containerd[1823]: time="2025-01-13T22:25:50.914201700Z" level=info msg="StartContainer for \"a3e0364310b6d877ef0a566ff83b13b98aa17608f1ad8dc796005c8aaec763cc\" returns successfully" Jan 13 22:25:51.016040 containerd[1823]: time="2025-01-13T22:25:51.015972240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-5k6gj,Uid:ecec6627-5497-4b63-a284-e7f9ae35d314,Namespace:tigera-operator,Attempt:0,}" Jan 13 22:25:51.026813 containerd[1823]: time="2025-01-13T22:25:51.026492803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:51.026813 containerd[1823]: time="2025-01-13T22:25:51.026766866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:51.026813 containerd[1823]: time="2025-01-13T22:25:51.026776325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:51.026973 containerd[1823]: time="2025-01-13T22:25:51.026833266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:51.053647 systemd[1]: Started cri-containerd-1c1290bac98c296935de32b95d32144ef9e7f06d497d3d88dff819299d6f334e.scope - libcontainer container 1c1290bac98c296935de32b95d32144ef9e7f06d497d3d88dff819299d6f334e. 
Jan 13 22:25:51.110540 containerd[1823]: time="2025-01-13T22:25:51.110509191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-5k6gj,Uid:ecec6627-5497-4b63-a284-e7f9ae35d314,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1c1290bac98c296935de32b95d32144ef9e7f06d497d3d88dff819299d6f334e\"" Jan 13 22:25:51.111522 containerd[1823]: time="2025-01-13T22:25:51.111501698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 22:25:51.128799 kubelet[3252]: I0113 22:25:51.128779 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xchbj" podStartSLOduration=1.128754471 podStartE2EDuration="1.128754471s" podCreationTimestamp="2025-01-13 22:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:25:51.128670402 +0000 UTC m=+15.083144892" watchObservedRunningTime="2025-01-13 22:25:51.128754471 +0000 UTC m=+15.083228952" Jan 13 22:25:52.824775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333563747.mount: Deactivated successfully. 
Jan 13 22:25:53.029613 containerd[1823]: time="2025-01-13T22:25:53.029587162Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:53.029857 containerd[1823]: time="2025-01-13T22:25:53.029794968Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764297" Jan 13 22:25:53.030172 containerd[1823]: time="2025-01-13T22:25:53.030156835Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:53.031195 containerd[1823]: time="2025-01-13T22:25:53.031181717Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:53.031663 containerd[1823]: time="2025-01-13T22:25:53.031646218Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.92012197s" Jan 13 22:25:53.031711 containerd[1823]: time="2025-01-13T22:25:53.031665510Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 22:25:53.032516 containerd[1823]: time="2025-01-13T22:25:53.032503095Z" level=info msg="CreateContainer within sandbox \"1c1290bac98c296935de32b95d32144ef9e7f06d497d3d88dff819299d6f334e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 22:25:53.036058 containerd[1823]: time="2025-01-13T22:25:53.036011763Z" level=info msg="CreateContainer within sandbox 
\"1c1290bac98c296935de32b95d32144ef9e7f06d497d3d88dff819299d6f334e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ff0682f9a92f445b38b1dc2a7e8fa590f66e5b43774275924dac255d2577c8e\"" Jan 13 22:25:53.036256 containerd[1823]: time="2025-01-13T22:25:53.036241168Z" level=info msg="StartContainer for \"4ff0682f9a92f445b38b1dc2a7e8fa590f66e5b43774275924dac255d2577c8e\"" Jan 13 22:25:53.058715 systemd[1]: Started cri-containerd-4ff0682f9a92f445b38b1dc2a7e8fa590f66e5b43774275924dac255d2577c8e.scope - libcontainer container 4ff0682f9a92f445b38b1dc2a7e8fa590f66e5b43774275924dac255d2577c8e. Jan 13 22:25:53.070491 containerd[1823]: time="2025-01-13T22:25:53.070462462Z" level=info msg="StartContainer for \"4ff0682f9a92f445b38b1dc2a7e8fa590f66e5b43774275924dac255d2577c8e\" returns successfully" Jan 13 22:25:53.135034 kubelet[3252]: I0113 22:25:53.135007 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-5k6gj" podStartSLOduration=1.2143197589999999 podStartE2EDuration="3.134968513s" podCreationTimestamp="2025-01-13 22:25:50 +0000 UTC" firstStartedPulling="2025-01-13 22:25:51.111187836 +0000 UTC m=+15.065662329" lastFinishedPulling="2025-01-13 22:25:53.031836599 +0000 UTC m=+16.986311083" observedRunningTime="2025-01-13 22:25:53.134815935 +0000 UTC m=+17.089290449" watchObservedRunningTime="2025-01-13 22:25:53.134968513 +0000 UTC m=+17.089443006" Jan 13 22:25:55.839688 kubelet[3252]: I0113 22:25:55.839630 3252 topology_manager.go:215] "Topology Admit Handler" podUID="de620a6a-320f-462d-81ae-ec3d1655ce72" podNamespace="calico-system" podName="calico-typha-58c444d79d-98dd2" Jan 13 22:25:55.852621 systemd[1]: Created slice kubepods-besteffort-podde620a6a_320f_462d_81ae_ec3d1655ce72.slice - libcontainer container kubepods-besteffort-podde620a6a_320f_462d_81ae_ec3d1655ce72.slice. 
Jan 13 22:25:55.868930 kubelet[3252]: I0113 22:25:55.868884 3252 topology_manager.go:215] "Topology Admit Handler" podUID="b0a2557c-1652-4a66-9385-3d0f47795ea0" podNamespace="calico-system" podName="calico-node-8kcns" Jan 13 22:25:55.875541 systemd[1]: Created slice kubepods-besteffort-podb0a2557c_1652_4a66_9385_3d0f47795ea0.slice - libcontainer container kubepods-besteffort-podb0a2557c_1652_4a66_9385_3d0f47795ea0.slice. Jan 13 22:25:55.915872 kubelet[3252]: I0113 22:25:55.915779 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-var-lib-calico\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916082 kubelet[3252]: I0113 22:25:55.915969 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfms\" (UniqueName: \"kubernetes.io/projected/b0a2557c-1652-4a66-9385-3d0f47795ea0-kube-api-access-fgfms\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916082 kubelet[3252]: I0113 22:25:55.916074 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgkkt\" (UniqueName: \"kubernetes.io/projected/de620a6a-320f-462d-81ae-ec3d1655ce72-kube-api-access-tgkkt\") pod \"calico-typha-58c444d79d-98dd2\" (UID: \"de620a6a-320f-462d-81ae-ec3d1655ce72\") " pod="calico-system/calico-typha-58c444d79d-98dd2" Jan 13 22:25:55.916248 kubelet[3252]: I0113 22:25:55.916136 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-policysync\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " 
pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916248 kubelet[3252]: I0113 22:25:55.916224 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b0a2557c-1652-4a66-9385-3d0f47795ea0-node-certs\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916382 kubelet[3252]: I0113 22:25:55.916349 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-xtables-lock\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916471 kubelet[3252]: I0113 22:25:55.916409 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de620a6a-320f-462d-81ae-ec3d1655ce72-tigera-ca-bundle\") pod \"calico-typha-58c444d79d-98dd2\" (UID: \"de620a6a-320f-462d-81ae-ec3d1655ce72\") " pod="calico-system/calico-typha-58c444d79d-98dd2" Jan 13 22:25:55.916556 kubelet[3252]: I0113 22:25:55.916487 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-lib-modules\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916556 kubelet[3252]: I0113 22:25:55.916549 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-var-run-calico\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 
22:25:55.916706 kubelet[3252]: I0113 22:25:55.916641 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-cni-net-dir\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916706 kubelet[3252]: I0113 22:25:55.916699 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-flexvol-driver-host\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916838 kubelet[3252]: I0113 22:25:55.916778 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-cni-log-dir\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.916908 kubelet[3252]: I0113 22:25:55.916869 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a2557c-1652-4a66-9385-3d0f47795ea0-tigera-ca-bundle\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.917012 kubelet[3252]: I0113 22:25:55.916936 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de620a6a-320f-462d-81ae-ec3d1655ce72-typha-certs\") pod \"calico-typha-58c444d79d-98dd2\" (UID: \"de620a6a-320f-462d-81ae-ec3d1655ce72\") " pod="calico-system/calico-typha-58c444d79d-98dd2" Jan 13 22:25:55.917012 kubelet[3252]: I0113 
22:25:55.917001 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b0a2557c-1652-4a66-9385-3d0f47795ea0-cni-bin-dir\") pod \"calico-node-8kcns\" (UID: \"b0a2557c-1652-4a66-9385-3d0f47795ea0\") " pod="calico-system/calico-node-8kcns" Jan 13 22:25:55.994045 kubelet[3252]: I0113 22:25:55.993937 3252 topology_manager.go:215] "Topology Admit Handler" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" podNamespace="calico-system" podName="csi-node-driver-zncfw" Jan 13 22:25:55.994595 kubelet[3252]: E0113 22:25:55.994554 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:25:56.018394 kubelet[3252]: I0113 22:25:56.018326 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95289ad8-9d9d-491a-a4ad-267a7ba7fe8b-kubelet-dir\") pod \"csi-node-driver-zncfw\" (UID: \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\") " pod="calico-system/csi-node-driver-zncfw" Jan 13 22:25:56.018890 kubelet[3252]: I0113 22:25:56.018651 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/95289ad8-9d9d-491a-a4ad-267a7ba7fe8b-varrun\") pod \"csi-node-driver-zncfw\" (UID: \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\") " pod="calico-system/csi-node-driver-zncfw" Jan 13 22:25:56.019083 kubelet[3252]: I0113 22:25:56.019001 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95289ad8-9d9d-491a-a4ad-267a7ba7fe8b-socket-dir\") pod 
\"csi-node-driver-zncfw\" (UID: \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\") " pod="calico-system/csi-node-driver-zncfw" Jan 13 22:25:56.019261 kubelet[3252]: I0113 22:25:56.019153 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6pq\" (UniqueName: \"kubernetes.io/projected/95289ad8-9d9d-491a-a4ad-267a7ba7fe8b-kube-api-access-6q6pq\") pod \"csi-node-driver-zncfw\" (UID: \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\") " pod="calico-system/csi-node-driver-zncfw" Jan 13 22:25:56.019974 kubelet[3252]: I0113 22:25:56.019895 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95289ad8-9d9d-491a-a4ad-267a7ba7fe8b-registration-dir\") pod \"csi-node-driver-zncfw\" (UID: \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\") " pod="calico-system/csi-node-driver-zncfw" Jan 13 22:25:56.022940 kubelet[3252]: E0113 22:25:56.022883 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.022940 kubelet[3252]: W0113 22:25:56.022939 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.023521 kubelet[3252]: E0113 22:25:56.023017 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.023744 kubelet[3252]: E0113 22:25:56.023687 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.023744 kubelet[3252]: W0113 22:25:56.023733 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.024013 kubelet[3252]: E0113 22:25:56.023799 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.027489 kubelet[3252]: E0113 22:25:56.027417 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.027489 kubelet[3252]: W0113 22:25:56.027489 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.027913 kubelet[3252]: E0113 22:25:56.027557 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.028438 kubelet[3252]: E0113 22:25:56.028285 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.028438 kubelet[3252]: W0113 22:25:56.028334 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.028438 kubelet[3252]: E0113 22:25:56.028400 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.037353 kubelet[3252]: E0113 22:25:56.037316 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.037353 kubelet[3252]: W0113 22:25:56.037349 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.037607 kubelet[3252]: E0113 22:25:56.037395 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.037804 kubelet[3252]: E0113 22:25:56.037772 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.037955 kubelet[3252]: W0113 22:25:56.037801 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.037955 kubelet[3252]: E0113 22:25:56.037842 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.120933 kubelet[3252]: E0113 22:25:56.120826 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.120933 kubelet[3252]: W0113 22:25:56.120849 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.120933 kubelet[3252]: E0113 22:25:56.120875 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.121172 kubelet[3252]: E0113 22:25:56.121159 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.121217 kubelet[3252]: W0113 22:25:56.121173 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.121217 kubelet[3252]: E0113 22:25:56.121190 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.121392 kubelet[3252]: E0113 22:25:56.121381 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.121392 kubelet[3252]: W0113 22:25:56.121390 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.121471 kubelet[3252]: E0113 22:25:56.121404 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.121605 kubelet[3252]: E0113 22:25:56.121594 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.121652 kubelet[3252]: W0113 22:25:56.121607 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.121652 kubelet[3252]: E0113 22:25:56.121629 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.121863 kubelet[3252]: E0113 22:25:56.121851 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.121863 kubelet[3252]: W0113 22:25:56.121862 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.121941 kubelet[3252]: E0113 22:25:56.121877 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.122079 kubelet[3252]: E0113 22:25:56.122068 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.122118 kubelet[3252]: W0113 22:25:56.122081 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.122118 kubelet[3252]: E0113 22:25:56.122103 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.122318 kubelet[3252]: E0113 22:25:56.122305 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.122358 kubelet[3252]: W0113 22:25:56.122319 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.122358 kubelet[3252]: E0113 22:25:56.122348 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.122516 kubelet[3252]: E0113 22:25:56.122504 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.122562 kubelet[3252]: W0113 22:25:56.122518 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.122595 kubelet[3252]: E0113 22:25:56.122569 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.122708 kubelet[3252]: E0113 22:25:56.122694 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.122708 kubelet[3252]: W0113 22:25:56.122703 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.122786 kubelet[3252]: E0113 22:25:56.122721 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.122860 kubelet[3252]: E0113 22:25:56.122850 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.122860 kubelet[3252]: W0113 22:25:56.122859 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.122938 kubelet[3252]: E0113 22:25:56.122882 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.123039 kubelet[3252]: E0113 22:25:56.123028 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.123082 kubelet[3252]: W0113 22:25:56.123041 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.123082 kubelet[3252]: E0113 22:25:56.123064 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.123236 kubelet[3252]: E0113 22:25:56.123227 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.123272 kubelet[3252]: W0113 22:25:56.123237 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.123272 kubelet[3252]: E0113 22:25:56.123252 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.123439 kubelet[3252]: E0113 22:25:56.123429 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.123439 kubelet[3252]: W0113 22:25:56.123438 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.123519 kubelet[3252]: E0113 22:25:56.123461 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.123690 kubelet[3252]: E0113 22:25:56.123679 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.123725 kubelet[3252]: W0113 22:25:56.123691 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.123725 kubelet[3252]: E0113 22:25:56.123711 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.123869 kubelet[3252]: E0113 22:25:56.123859 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.123904 kubelet[3252]: W0113 22:25:56.123871 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.123904 kubelet[3252]: E0113 22:25:56.123892 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.124077 kubelet[3252]: E0113 22:25:56.124041 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.124077 kubelet[3252]: W0113 22:25:56.124050 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.124077 kubelet[3252]: E0113 22:25:56.124072 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.124198 kubelet[3252]: E0113 22:25:56.124189 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.124198 kubelet[3252]: W0113 22:25:56.124197 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.124262 kubelet[3252]: E0113 22:25:56.124223 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.124339 kubelet[3252]: E0113 22:25:56.124331 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.124374 kubelet[3252]: W0113 22:25:56.124339 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.124374 kubelet[3252]: E0113 22:25:56.124358 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.124499 kubelet[3252]: E0113 22:25:56.124490 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.124499 kubelet[3252]: W0113 22:25:56.124498 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.124569 kubelet[3252]: E0113 22:25:56.124511 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.124665 kubelet[3252]: E0113 22:25:56.124657 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.124700 kubelet[3252]: W0113 22:25:56.124665 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.124700 kubelet[3252]: E0113 22:25:56.124678 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.124855 kubelet[3252]: E0113 22:25:56.124820 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.124855 kubelet[3252]: W0113 22:25:56.124828 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.124855 kubelet[3252]: E0113 22:25:56.124840 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.124971 kubelet[3252]: E0113 22:25:56.124963 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.125003 kubelet[3252]: W0113 22:25:56.124971 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.125003 kubelet[3252]: E0113 22:25:56.124983 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.125254 kubelet[3252]: E0113 22:25:56.125242 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.125289 kubelet[3252]: W0113 22:25:56.125254 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.125289 kubelet[3252]: E0113 22:25:56.125271 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.125440 kubelet[3252]: E0113 22:25:56.125431 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.125476 kubelet[3252]: W0113 22:25:56.125440 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.125476 kubelet[3252]: E0113 22:25:56.125465 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.125743 kubelet[3252]: E0113 22:25:56.125732 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.125775 kubelet[3252]: W0113 22:25:56.125746 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.125775 kubelet[3252]: E0113 22:25:56.125759 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:25:56.132130 kubelet[3252]: E0113 22:25:56.132110 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:25:56.132130 kubelet[3252]: W0113 22:25:56.132123 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:25:56.132310 kubelet[3252]: E0113 22:25:56.132144 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:25:56.156564 containerd[1823]: time="2025-01-13T22:25:56.156487494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c444d79d-98dd2,Uid:de620a6a-320f-462d-81ae-ec3d1655ce72,Namespace:calico-system,Attempt:0,}" Jan 13 22:25:56.167337 containerd[1823]: time="2025-01-13T22:25:56.167106241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:56.167337 containerd[1823]: time="2025-01-13T22:25:56.167328865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:56.167337 containerd[1823]: time="2025-01-13T22:25:56.167336749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:56.167442 containerd[1823]: time="2025-01-13T22:25:56.167379769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:56.178279 containerd[1823]: time="2025-01-13T22:25:56.178256703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kcns,Uid:b0a2557c-1652-4a66-9385-3d0f47795ea0,Namespace:calico-system,Attempt:0,}" Jan 13 22:25:56.186580 systemd[1]: Started cri-containerd-96473377f4b6803d887b258466ddf73b9e2e19e3cbdea157ea03dc74969e4026.scope - libcontainer container 96473377f4b6803d887b258466ddf73b9e2e19e3cbdea157ea03dc74969e4026. Jan 13 22:25:56.187404 containerd[1823]: time="2025-01-13T22:25:56.187367498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:25:56.187404 containerd[1823]: time="2025-01-13T22:25:56.187396262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:25:56.187496 containerd[1823]: time="2025-01-13T22:25:56.187407433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:56.187496 containerd[1823]: time="2025-01-13T22:25:56.187465225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:25:56.193015 systemd[1]: Started cri-containerd-47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93.scope - libcontainer container 47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93. 
Jan 13 22:25:56.202549 containerd[1823]: time="2025-01-13T22:25:56.202494000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kcns,Uid:b0a2557c-1652-4a66-9385-3d0f47795ea0,Namespace:calico-system,Attempt:0,} returns sandbox id \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\"" Jan 13 22:25:56.203210 containerd[1823]: time="2025-01-13T22:25:56.203195039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 22:25:56.208171 containerd[1823]: time="2025-01-13T22:25:56.208150826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c444d79d-98dd2,Uid:de620a6a-320f-462d-81ae-ec3d1655ce72,Namespace:calico-system,Attempt:0,} returns sandbox id \"96473377f4b6803d887b258466ddf73b9e2e19e3cbdea157ea03dc74969e4026\"" Jan 13 22:25:57.089289 kubelet[3252]: E0113 22:25:57.089252 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:25:57.712190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258081620.mount: Deactivated successfully. 
Jan 13 22:25:57.778024 containerd[1823]: time="2025-01-13T22:25:57.777974186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:57.778217 containerd[1823]: time="2025-01-13T22:25:57.778184623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 22:25:57.778473 containerd[1823]: time="2025-01-13T22:25:57.778408498Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:57.779492 containerd[1823]: time="2025-01-13T22:25:57.779421964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:57.779941 containerd[1823]: time="2025-01-13T22:25:57.779898162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.576685023s" Jan 13 22:25:57.780067 containerd[1823]: time="2025-01-13T22:25:57.779953720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 22:25:57.780595 containerd[1823]: time="2025-01-13T22:25:57.780574836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 22:25:57.781557 containerd[1823]: time="2025-01-13T22:25:57.781540992Z" level=info msg="CreateContainer within 
sandbox \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 22:25:57.786459 containerd[1823]: time="2025-01-13T22:25:57.786438498Z" level=info msg="CreateContainer within sandbox \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103\"" Jan 13 22:25:57.786833 containerd[1823]: time="2025-01-13T22:25:57.786801720Z" level=info msg="StartContainer for \"46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103\"" Jan 13 22:25:57.804828 systemd[1]: Started cri-containerd-46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103.scope - libcontainer container 46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103. Jan 13 22:25:57.818694 containerd[1823]: time="2025-01-13T22:25:57.818666221Z" level=info msg="StartContainer for \"46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103\" returns successfully" Jan 13 22:25:57.825796 systemd[1]: cri-containerd-46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103.scope: Deactivated successfully. Jan 13 22:25:58.029981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103-rootfs.mount: Deactivated successfully. 
Jan 13 22:25:58.075177 containerd[1823]: time="2025-01-13T22:25:58.075144415Z" level=info msg="shim disconnected" id=46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103 namespace=k8s.io Jan 13 22:25:58.075177 containerd[1823]: time="2025-01-13T22:25:58.075176742Z" level=warning msg="cleaning up after shim disconnected" id=46baa26cde10125bfebb9fe540ea2caf5536533bbd9972d7fbad93489518a103 namespace=k8s.io Jan 13 22:25:58.075285 containerd[1823]: time="2025-01-13T22:25:58.075181903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:25:59.088519 kubelet[3252]: E0113 22:25:59.088472 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:25:59.966528 containerd[1823]: time="2025-01-13T22:25:59.966474995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:59.966756 containerd[1823]: time="2025-01-13T22:25:59.966724633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 22:25:59.967059 containerd[1823]: time="2025-01-13T22:25:59.967044728Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:59.968150 containerd[1823]: time="2025-01-13T22:25:59.968112078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:25:59.968380 containerd[1823]: time="2025-01-13T22:25:59.968368842Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.187711124s" Jan 13 22:25:59.968401 containerd[1823]: time="2025-01-13T22:25:59.968384159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 22:25:59.968675 containerd[1823]: time="2025-01-13T22:25:59.968661975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 22:25:59.972664 containerd[1823]: time="2025-01-13T22:25:59.972643625Z" level=info msg="CreateContainer within sandbox \"96473377f4b6803d887b258466ddf73b9e2e19e3cbdea157ea03dc74969e4026\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 22:25:59.976873 containerd[1823]: time="2025-01-13T22:25:59.976830528Z" level=info msg="CreateContainer within sandbox \"96473377f4b6803d887b258466ddf73b9e2e19e3cbdea157ea03dc74969e4026\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"766f611b55355bfcee24fba36214d57e7c4401b525864cce98bcf746ca3a9866\"" Jan 13 22:25:59.977100 containerd[1823]: time="2025-01-13T22:25:59.977061601Z" level=info msg="StartContainer for \"766f611b55355bfcee24fba36214d57e7c4401b525864cce98bcf746ca3a9866\"" Jan 13 22:26:00.002599 systemd[1]: Started cri-containerd-766f611b55355bfcee24fba36214d57e7c4401b525864cce98bcf746ca3a9866.scope - libcontainer container 766f611b55355bfcee24fba36214d57e7c4401b525864cce98bcf746ca3a9866. 
Jan 13 22:26:00.030454 containerd[1823]: time="2025-01-13T22:26:00.030425713Z" level=info msg="StartContainer for \"766f611b55355bfcee24fba36214d57e7c4401b525864cce98bcf746ca3a9866\" returns successfully" Jan 13 22:26:00.179869 kubelet[3252]: I0113 22:26:00.179800 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-58c444d79d-98dd2" podStartSLOduration=1.41973746 podStartE2EDuration="5.179690262s" podCreationTimestamp="2025-01-13 22:25:55 +0000 UTC" firstStartedPulling="2025-01-13 22:25:56.208607562 +0000 UTC m=+20.163082045" lastFinishedPulling="2025-01-13 22:25:59.968560364 +0000 UTC m=+23.923034847" observedRunningTime="2025-01-13 22:26:00.178854157 +0000 UTC m=+24.133328699" watchObservedRunningTime="2025-01-13 22:26:00.179690262 +0000 UTC m=+24.134164786" Jan 13 22:26:01.088815 kubelet[3252]: E0113 22:26:01.088758 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:26:01.158835 kubelet[3252]: I0113 22:26:01.158785 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:26:03.088778 kubelet[3252]: E0113 22:26:03.088679 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:26:03.824963 containerd[1823]: time="2025-01-13T22:26:03.824940373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:03.825251 containerd[1823]: 
time="2025-01-13T22:26:03.825161148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 22:26:03.825542 containerd[1823]: time="2025-01-13T22:26:03.825528384Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:03.826539 containerd[1823]: time="2025-01-13T22:26:03.826507717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:03.827274 containerd[1823]: time="2025-01-13T22:26:03.827257469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.858578651s" Jan 13 22:26:03.827307 containerd[1823]: time="2025-01-13T22:26:03.827276277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 22:26:03.828117 containerd[1823]: time="2025-01-13T22:26:03.828080205Z" level=info msg="CreateContainer within sandbox \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 22:26:03.833590 containerd[1823]: time="2025-01-13T22:26:03.833547326Z" level=info msg="CreateContainer within sandbox \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81\"" Jan 13 22:26:03.833825 
containerd[1823]: time="2025-01-13T22:26:03.833783371Z" level=info msg="StartContainer for \"b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81\"" Jan 13 22:26:03.858747 systemd[1]: Started cri-containerd-b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81.scope - libcontainer container b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81. Jan 13 22:26:03.872908 containerd[1823]: time="2025-01-13T22:26:03.872881401Z" level=info msg="StartContainer for \"b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81\" returns successfully" Jan 13 22:26:04.415833 systemd[1]: cri-containerd-b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81.scope: Deactivated successfully. Jan 13 22:26:04.426248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81-rootfs.mount: Deactivated successfully. Jan 13 22:26:04.505753 kubelet[3252]: I0113 22:26:04.505702 3252 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 22:26:04.521772 kubelet[3252]: I0113 22:26:04.521750 3252 topology_manager.go:215] "Topology Admit Handler" podUID="207cec62-0844-4486-bfd5-d28a764a464c" podNamespace="kube-system" podName="coredns-76f75df574-7jn9j" Jan 13 22:26:04.522235 kubelet[3252]: I0113 22:26:04.522221 3252 topology_manager.go:215] "Topology Admit Handler" podUID="fddf42a2-d398-43f9-a55a-0d2b23c7af2d" podNamespace="kube-system" podName="coredns-76f75df574-789dw" Jan 13 22:26:04.522472 kubelet[3252]: I0113 22:26:04.522463 3252 topology_manager.go:215] "Topology Admit Handler" podUID="61f640c6-316d-4aef-a0dc-40ac5d65e36f" podNamespace="calico-apiserver" podName="calico-apiserver-8cc7c86dd-gr6v6" Jan 13 22:26:04.522649 kubelet[3252]: I0113 22:26:04.522638 3252 topology_manager.go:215] "Topology Admit Handler" podUID="0e56f5ab-a1b4-4441-b350-aad158e7cc51" podNamespace="calico-system" podName="calico-kube-controllers-58696dffd4-626g8" Jan 
13 22:26:04.522830 kubelet[3252]: I0113 22:26:04.522815 3252 topology_manager.go:215] "Topology Admit Handler" podUID="f6346afb-6034-4ffe-a552-8d2d3bc5a71c" podNamespace="calico-apiserver" podName="calico-apiserver-8cc7c86dd-xfvlc" Jan 13 22:26:04.525406 systemd[1]: Created slice kubepods-burstable-pod207cec62_0844_4486_bfd5_d28a764a464c.slice - libcontainer container kubepods-burstable-pod207cec62_0844_4486_bfd5_d28a764a464c.slice. Jan 13 22:26:04.528490 systemd[1]: Created slice kubepods-burstable-podfddf42a2_d398_43f9_a55a_0d2b23c7af2d.slice - libcontainer container kubepods-burstable-podfddf42a2_d398_43f9_a55a_0d2b23c7af2d.slice. Jan 13 22:26:04.531112 systemd[1]: Created slice kubepods-besteffort-pod61f640c6_316d_4aef_a0dc_40ac5d65e36f.slice - libcontainer container kubepods-besteffort-pod61f640c6_316d_4aef_a0dc_40ac5d65e36f.slice. Jan 13 22:26:04.533638 systemd[1]: Created slice kubepods-besteffort-pod0e56f5ab_a1b4_4441_b350_aad158e7cc51.slice - libcontainer container kubepods-besteffort-pod0e56f5ab_a1b4_4441_b350_aad158e7cc51.slice. Jan 13 22:26:04.535714 systemd[1]: Created slice kubepods-besteffort-podf6346afb_6034_4ffe_a552_8d2d3bc5a71c.slice - libcontainer container kubepods-besteffort-podf6346afb_6034_4ffe_a552_8d2d3bc5a71c.slice. 
Jan 13 22:26:04.583179 kubelet[3252]: I0113 22:26:04.583134 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqncg\" (UniqueName: \"kubernetes.io/projected/207cec62-0844-4486-bfd5-d28a764a464c-kube-api-access-tqncg\") pod \"coredns-76f75df574-7jn9j\" (UID: \"207cec62-0844-4486-bfd5-d28a764a464c\") " pod="kube-system/coredns-76f75df574-7jn9j" Jan 13 22:26:04.583179 kubelet[3252]: I0113 22:26:04.583159 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e56f5ab-a1b4-4441-b350-aad158e7cc51-tigera-ca-bundle\") pod \"calico-kube-controllers-58696dffd4-626g8\" (UID: \"0e56f5ab-a1b4-4441-b350-aad158e7cc51\") " pod="calico-system/calico-kube-controllers-58696dffd4-626g8" Jan 13 22:26:04.583179 kubelet[3252]: I0113 22:26:04.583175 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262w5\" (UniqueName: \"kubernetes.io/projected/0e56f5ab-a1b4-4441-b350-aad158e7cc51-kube-api-access-262w5\") pod \"calico-kube-controllers-58696dffd4-626g8\" (UID: \"0e56f5ab-a1b4-4441-b350-aad158e7cc51\") " pod="calico-system/calico-kube-controllers-58696dffd4-626g8" Jan 13 22:26:04.583311 kubelet[3252]: I0113 22:26:04.583191 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mzfr\" (UniqueName: \"kubernetes.io/projected/f6346afb-6034-4ffe-a552-8d2d3bc5a71c-kube-api-access-9mzfr\") pod \"calico-apiserver-8cc7c86dd-xfvlc\" (UID: \"f6346afb-6034-4ffe-a552-8d2d3bc5a71c\") " pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" Jan 13 22:26:04.583311 kubelet[3252]: I0113 22:26:04.583222 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/207cec62-0844-4486-bfd5-d28a764a464c-config-volume\") pod \"coredns-76f75df574-7jn9j\" (UID: \"207cec62-0844-4486-bfd5-d28a764a464c\") " pod="kube-system/coredns-76f75df574-7jn9j" Jan 13 22:26:04.583311 kubelet[3252]: I0113 22:26:04.583286 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6346afb-6034-4ffe-a552-8d2d3bc5a71c-calico-apiserver-certs\") pod \"calico-apiserver-8cc7c86dd-xfvlc\" (UID: \"f6346afb-6034-4ffe-a552-8d2d3bc5a71c\") " pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" Jan 13 22:26:04.583311 kubelet[3252]: I0113 22:26:04.583307 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61f640c6-316d-4aef-a0dc-40ac5d65e36f-calico-apiserver-certs\") pod \"calico-apiserver-8cc7c86dd-gr6v6\" (UID: \"61f640c6-316d-4aef-a0dc-40ac5d65e36f\") " pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" Jan 13 22:26:04.583384 kubelet[3252]: I0113 22:26:04.583336 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fddf42a2-d398-43f9-a55a-0d2b23c7af2d-config-volume\") pod \"coredns-76f75df574-789dw\" (UID: \"fddf42a2-d398-43f9-a55a-0d2b23c7af2d\") " pod="kube-system/coredns-76f75df574-789dw" Jan 13 22:26:04.583384 kubelet[3252]: I0113 22:26:04.583352 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2z4\" (UniqueName: \"kubernetes.io/projected/fddf42a2-d398-43f9-a55a-0d2b23c7af2d-kube-api-access-zg2z4\") pod \"coredns-76f75df574-789dw\" (UID: \"fddf42a2-d398-43f9-a55a-0d2b23c7af2d\") " pod="kube-system/coredns-76f75df574-789dw" Jan 13 22:26:04.583384 kubelet[3252]: I0113 22:26:04.583367 3252 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rlpz\" (UniqueName: \"kubernetes.io/projected/61f640c6-316d-4aef-a0dc-40ac5d65e36f-kube-api-access-6rlpz\") pod \"calico-apiserver-8cc7c86dd-gr6v6\" (UID: \"61f640c6-316d-4aef-a0dc-40ac5d65e36f\") " pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" Jan 13 22:26:04.828371 containerd[1823]: time="2025-01-13T22:26:04.828146537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7jn9j,Uid:207cec62-0844-4486-bfd5-d28a764a464c,Namespace:kube-system,Attempt:0,}" Jan 13 22:26:04.831610 containerd[1823]: time="2025-01-13T22:26:04.831533985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-789dw,Uid:fddf42a2-d398-43f9-a55a-0d2b23c7af2d,Namespace:kube-system,Attempt:0,}" Jan 13 22:26:04.833834 containerd[1823]: time="2025-01-13T22:26:04.833758213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-gr6v6,Uid:61f640c6-316d-4aef-a0dc-40ac5d65e36f,Namespace:calico-apiserver,Attempt:0,}" Jan 13 22:26:04.835928 containerd[1823]: time="2025-01-13T22:26:04.835858120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58696dffd4-626g8,Uid:0e56f5ab-a1b4-4441-b350-aad158e7cc51,Namespace:calico-system,Attempt:0,}" Jan 13 22:26:04.837187 containerd[1823]: time="2025-01-13T22:26:04.837139813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-xfvlc,Uid:f6346afb-6034-4ffe-a552-8d2d3bc5a71c,Namespace:calico-apiserver,Attempt:0,}" Jan 13 22:26:05.078916 containerd[1823]: time="2025-01-13T22:26:05.078816341Z" level=info msg="shim disconnected" id=b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81 namespace=k8s.io Jan 13 22:26:05.078916 containerd[1823]: time="2025-01-13T22:26:05.078847154Z" level=warning msg="cleaning up after shim disconnected" id=b6578fe4155c1224137149ab8626e50cc50a554cd53cf95f9dc1a7e4e01a4c81 
namespace=k8s.io Jan 13 22:26:05.078916 containerd[1823]: time="2025-01-13T22:26:05.078853036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:26:05.093786 systemd[1]: Created slice kubepods-besteffort-pod95289ad8_9d9d_491a_a4ad_267a7ba7fe8b.slice - libcontainer container kubepods-besteffort-pod95289ad8_9d9d_491a_a4ad_267a7ba7fe8b.slice. Jan 13 22:26:05.104745 containerd[1823]: time="2025-01-13T22:26:05.104691147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zncfw,Uid:95289ad8-9d9d-491a-a4ad-267a7ba7fe8b,Namespace:calico-system,Attempt:0,}" Jan 13 22:26:05.131012 containerd[1823]: time="2025-01-13T22:26:05.130959405Z" level=error msg="Failed to destroy network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.131244 containerd[1823]: time="2025-01-13T22:26:05.131224154Z" level=error msg="encountered an error cleaning up failed sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.131294 containerd[1823]: time="2025-01-13T22:26:05.131277254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7jn9j,Uid:207cec62-0844-4486-bfd5-d28a764a464c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.131598 
kubelet[3252]: E0113 22:26:05.131575 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.131693 kubelet[3252]: E0113 22:26:05.131627 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7jn9j" Jan 13 22:26:05.131693 kubelet[3252]: E0113 22:26:05.131648 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7jn9j" Jan 13 22:26:05.131758 kubelet[3252]: E0113 22:26:05.131704 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7jn9j_kube-system(207cec62-0844-4486-bfd5-d28a764a464c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7jn9j_kube-system(207cec62-0844-4486-bfd5-d28a764a464c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7jn9j" podUID="207cec62-0844-4486-bfd5-d28a764a464c" Jan 13 22:26:05.144067 containerd[1823]: time="2025-01-13T22:26:05.144038201Z" level=error msg="Failed to destroy network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144241 containerd[1823]: time="2025-01-13T22:26:05.144229083Z" level=error msg="encountered an error cleaning up failed sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144272 containerd[1823]: time="2025-01-13T22:26:05.144263390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-xfvlc,Uid:f6346afb-6034-4ffe-a552-8d2d3bc5a71c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144422 kubelet[3252]: E0113 22:26:05.144406 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144484 kubelet[3252]: E0113 22:26:05.144449 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" Jan 13 22:26:05.144484 kubelet[3252]: E0113 22:26:05.144465 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" Jan 13 22:26:05.144529 kubelet[3252]: E0113 22:26:05.144500 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cc7c86dd-xfvlc_calico-apiserver(f6346afb-6034-4ffe-a552-8d2d3bc5a71c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cc7c86dd-xfvlc_calico-apiserver(f6346afb-6034-4ffe-a552-8d2d3bc5a71c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" podUID="f6346afb-6034-4ffe-a552-8d2d3bc5a71c" Jan 13 22:26:05.144569 containerd[1823]: time="2025-01-13T22:26:05.144548929Z" 
level=error msg="Failed to destroy network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144704 containerd[1823]: time="2025-01-13T22:26:05.144691207Z" level=error msg="encountered an error cleaning up failed sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144733 containerd[1823]: time="2025-01-13T22:26:05.144713684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-gr6v6,Uid:61f640c6-316d-4aef-a0dc-40ac5d65e36f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144807 kubelet[3252]: E0113 22:26:05.144795 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.144910 kubelet[3252]: E0113 22:26:05.144824 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" Jan 13 22:26:05.144910 kubelet[3252]: E0113 22:26:05.144842 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" Jan 13 22:26:05.144910 kubelet[3252]: E0113 22:26:05.144874 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cc7c86dd-gr6v6_calico-apiserver(61f640c6-316d-4aef-a0dc-40ac5d65e36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cc7c86dd-gr6v6_calico-apiserver(61f640c6-316d-4aef-a0dc-40ac5d65e36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" podUID="61f640c6-316d-4aef-a0dc-40ac5d65e36f" Jan 13 22:26:05.145199 containerd[1823]: time="2025-01-13T22:26:05.145181339Z" level=error msg="Failed to destroy network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 13 22:26:05.145330 containerd[1823]: time="2025-01-13T22:26:05.145317827Z" level=error msg="encountered an error cleaning up failed sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145368 containerd[1823]: time="2025-01-13T22:26:05.145339988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58696dffd4-626g8,Uid:0e56f5ab-a1b4-4441-b350-aad158e7cc51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145421 containerd[1823]: time="2025-01-13T22:26:05.145368380Z" level=error msg="Failed to destroy network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145461 kubelet[3252]: E0113 22:26:05.145434 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145502 kubelet[3252]: E0113 22:26:05.145467 3252 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58696dffd4-626g8" Jan 13 22:26:05.145502 kubelet[3252]: E0113 22:26:05.145481 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58696dffd4-626g8" Jan 13 22:26:05.145545 kubelet[3252]: E0113 22:26:05.145506 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58696dffd4-626g8_calico-system(0e56f5ab-a1b4-4441-b350-aad158e7cc51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58696dffd4-626g8_calico-system(0e56f5ab-a1b4-4441-b350-aad158e7cc51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58696dffd4-626g8" podUID="0e56f5ab-a1b4-4441-b350-aad158e7cc51" Jan 13 22:26:05.145579 containerd[1823]: time="2025-01-13T22:26:05.145545148Z" level=error msg="Failed to destroy network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145666 containerd[1823]: time="2025-01-13T22:26:05.145648677Z" level=error msg="encountered an error cleaning up failed sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145697 containerd[1823]: time="2025-01-13T22:26:05.145681222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-789dw,Uid:fddf42a2-d398-43f9-a55a-0d2b23c7af2d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145731 containerd[1823]: time="2025-01-13T22:26:05.145716203Z" level=error msg="encountered an error cleaning up failed sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145754 containerd[1823]: time="2025-01-13T22:26:05.145744584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zncfw,Uid:95289ad8-9d9d-491a-a4ad-267a7ba7fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145781 kubelet[3252]: E0113 22:26:05.145773 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145800 kubelet[3252]: E0113 22:26:05.145794 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-789dw" Jan 13 22:26:05.145822 kubelet[3252]: E0113 22:26:05.145804 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.145822 kubelet[3252]: E0113 22:26:05.145808 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-789dw" Jan 13 22:26:05.145822 kubelet[3252]: E0113 22:26:05.145818 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zncfw" Jan 13 22:26:05.145875 kubelet[3252]: E0113 22:26:05.145835 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-789dw_kube-system(fddf42a2-d398-43f9-a55a-0d2b23c7af2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-789dw_kube-system(fddf42a2-d398-43f9-a55a-0d2b23c7af2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-789dw" podUID="fddf42a2-d398-43f9-a55a-0d2b23c7af2d" Jan 13 22:26:05.145875 kubelet[3252]: E0113 22:26:05.145841 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zncfw" Jan 13 22:26:05.145875 kubelet[3252]: E0113 22:26:05.145864 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-zncfw_calico-system(95289ad8-9d9d-491a-a4ad-267a7ba7fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zncfw_calico-system(95289ad8-9d9d-491a-a4ad-267a7ba7fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:26:05.169267 kubelet[3252]: I0113 22:26:05.169229 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:05.169622 containerd[1823]: time="2025-01-13T22:26:05.169607938Z" level=info msg="StopPodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\"" Jan 13 22:26:05.169663 kubelet[3252]: I0113 22:26:05.169639 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:05.169734 containerd[1823]: time="2025-01-13T22:26:05.169722008Z" level=info msg="Ensure that sandbox 22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a in task-service has been cleanup successfully" Jan 13 22:26:05.169876 containerd[1823]: time="2025-01-13T22:26:05.169865212Z" level=info msg="StopPodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\"" Jan 13 22:26:05.169972 containerd[1823]: time="2025-01-13T22:26:05.169962262Z" level=info msg="Ensure that sandbox d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f in task-service has been cleanup successfully" Jan 13 22:26:05.170126 kubelet[3252]: I0113 22:26:05.170119 3252 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:05.170331 containerd[1823]: time="2025-01-13T22:26:05.170317244Z" level=info msg="StopPodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\"" Jan 13 22:26:05.170409 containerd[1823]: time="2025-01-13T22:26:05.170400132Z" level=info msg="Ensure that sandbox 454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64 in task-service has been cleanup successfully" Jan 13 22:26:05.170522 kubelet[3252]: I0113 22:26:05.170514 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:05.170742 containerd[1823]: time="2025-01-13T22:26:05.170730916Z" level=info msg="StopPodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\"" Jan 13 22:26:05.170830 containerd[1823]: time="2025-01-13T22:26:05.170817170Z" level=info msg="Ensure that sandbox a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621 in task-service has been cleanup successfully" Jan 13 22:26:05.172185 kubelet[3252]: I0113 22:26:05.172157 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:05.172370 containerd[1823]: time="2025-01-13T22:26:05.172346147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 22:26:05.172537 containerd[1823]: time="2025-01-13T22:26:05.172517714Z" level=info msg="StopPodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\"" Jan 13 22:26:05.172671 containerd[1823]: time="2025-01-13T22:26:05.172656929Z" level=info msg="Ensure that sandbox 87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f in task-service has been cleanup successfully" Jan 13 22:26:05.172905 kubelet[3252]: I0113 22:26:05.172883 3252 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:05.173438 containerd[1823]: time="2025-01-13T22:26:05.173412553Z" level=info msg="StopPodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\"" Jan 13 22:26:05.173567 containerd[1823]: time="2025-01-13T22:26:05.173556600Z" level=info msg="Ensure that sandbox 0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c in task-service has been cleanup successfully" Jan 13 22:26:05.185883 containerd[1823]: time="2025-01-13T22:26:05.185811023Z" level=error msg="StopPodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" failed" error="failed to destroy network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.186070 kubelet[3252]: E0113 22:26:05.185983 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:05.186070 kubelet[3252]: E0113 22:26:05.186045 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f"} Jan 13 22:26:05.186165 kubelet[3252]: E0113 22:26:05.186077 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"0e56f5ab-a1b4-4441-b350-aad158e7cc51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:26:05.186165 kubelet[3252]: E0113 22:26:05.186098 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e56f5ab-a1b4-4441-b350-aad158e7cc51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58696dffd4-626g8" podUID="0e56f5ab-a1b4-4441-b350-aad158e7cc51" Jan 13 22:26:05.186569 containerd[1823]: time="2025-01-13T22:26:05.186553495Z" level=error msg="StopPodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" failed" error="failed to destroy network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.186681 kubelet[3252]: E0113 22:26:05.186673 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:05.186706 kubelet[3252]: E0113 22:26:05.186691 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a"} Jan 13 22:26:05.186729 kubelet[3252]: E0113 22:26:05.186722 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"207cec62-0844-4486-bfd5-d28a764a464c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:26:05.186770 kubelet[3252]: E0113 22:26:05.186744 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"207cec62-0844-4486-bfd5-d28a764a464c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7jn9j" podUID="207cec62-0844-4486-bfd5-d28a764a464c" Jan 13 22:26:05.187326 containerd[1823]: time="2025-01-13T22:26:05.187309222Z" level=error msg="StopPodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" failed" error="failed to destroy network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 22:26:05.187398 kubelet[3252]: E0113 22:26:05.187390 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:05.187425 kubelet[3252]: E0113 22:26:05.187406 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64"} Jan 13 22:26:05.187425 kubelet[3252]: E0113 22:26:05.187424 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fddf42a2-d398-43f9-a55a-0d2b23c7af2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:26:05.187538 kubelet[3252]: E0113 22:26:05.187439 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fddf42a2-d398-43f9-a55a-0d2b23c7af2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-789dw" podUID="fddf42a2-d398-43f9-a55a-0d2b23c7af2d" Jan 13 
22:26:05.187588 containerd[1823]: time="2025-01-13T22:26:05.187556696Z" level=error msg="StopPodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" failed" error="failed to destroy network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.187670 kubelet[3252]: E0113 22:26:05.187662 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:05.187698 kubelet[3252]: E0113 22:26:05.187674 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621"} Jan 13 22:26:05.187698 kubelet[3252]: E0113 22:26:05.187692 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6346afb-6034-4ffe-a552-8d2d3bc5a71c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:26:05.187747 kubelet[3252]: E0113 22:26:05.187705 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"f6346afb-6034-4ffe-a552-8d2d3bc5a71c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" podUID="f6346afb-6034-4ffe-a552-8d2d3bc5a71c" Jan 13 22:26:05.190315 containerd[1823]: time="2025-01-13T22:26:05.190297221Z" level=error msg="StopPodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" failed" error="failed to destroy network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.190379 containerd[1823]: time="2025-01-13T22:26:05.190363902Z" level=error msg="StopPodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" failed" error="failed to destroy network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:26:05.190413 kubelet[3252]: E0113 22:26:05.190398 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:05.190465 kubelet[3252]: E0113 22:26:05.190423 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f"} Jan 13 22:26:05.190465 kubelet[3252]: E0113 22:26:05.190426 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:05.190465 kubelet[3252]: E0113 22:26:05.190440 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c"} Jan 13 22:26:05.190465 kubelet[3252]: E0113 22:26:05.190466 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61f640c6-316d-4aef-a0dc-40ac5d65e36f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:26:05.190576 kubelet[3252]: E0113 22:26:05.190478 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:26:05.190576 kubelet[3252]: E0113 22:26:05.190482 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61f640c6-316d-4aef-a0dc-40ac5d65e36f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" podUID="61f640c6-316d-4aef-a0dc-40ac5d65e36f" Jan 13 22:26:05.190576 kubelet[3252]: E0113 22:26:05.190505 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zncfw" podUID="95289ad8-9d9d-491a-a4ad-267a7ba7fe8b" Jan 13 22:26:05.837150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621-shm.mount: Deactivated successfully. Jan 13 22:26:05.837211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f-shm.mount: Deactivated successfully. 
Jan 13 22:26:05.837261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c-shm.mount: Deactivated successfully. Jan 13 22:26:05.837312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64-shm.mount: Deactivated successfully. Jan 13 22:26:05.837365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a-shm.mount: Deactivated successfully. Jan 13 22:26:10.025430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225683762.mount: Deactivated successfully. Jan 13 22:26:10.048141 containerd[1823]: time="2025-01-13T22:26:10.048091356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:10.048324 containerd[1823]: time="2025-01-13T22:26:10.048241904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 22:26:10.048611 containerd[1823]: time="2025-01-13T22:26:10.048596773Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:10.049543 containerd[1823]: time="2025-01-13T22:26:10.049528171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:10.050214 containerd[1823]: time="2025-01-13T22:26:10.050200150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.877822428s" Jan 13 22:26:10.050259 containerd[1823]: time="2025-01-13T22:26:10.050216388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 22:26:10.053461 containerd[1823]: time="2025-01-13T22:26:10.053438327Z" level=info msg="CreateContainer within sandbox \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 22:26:10.060233 containerd[1823]: time="2025-01-13T22:26:10.060189312Z" level=info msg="CreateContainer within sandbox \"47e5136cd29f16202ad8ec95df610ffaa702371fee8bf97791b7d789f60e2c93\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2bfc17b9d41ac0837bdb60616bd0cd84abc1c8bfea60dcdaeaac8243caee71a\"" Jan 13 22:26:10.060380 containerd[1823]: time="2025-01-13T22:26:10.060369095Z" level=info msg="StartContainer for \"c2bfc17b9d41ac0837bdb60616bd0cd84abc1c8bfea60dcdaeaac8243caee71a\"" Jan 13 22:26:10.083625 systemd[1]: Started cri-containerd-c2bfc17b9d41ac0837bdb60616bd0cd84abc1c8bfea60dcdaeaac8243caee71a.scope - libcontainer container c2bfc17b9d41ac0837bdb60616bd0cd84abc1c8bfea60dcdaeaac8243caee71a. Jan 13 22:26:10.098200 containerd[1823]: time="2025-01-13T22:26:10.098172143Z" level=info msg="StartContainer for \"c2bfc17b9d41ac0837bdb60616bd0cd84abc1c8bfea60dcdaeaac8243caee71a\" returns successfully" Jan 13 22:26:10.153531 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 22:26:10.153587 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 22:26:10.198177 kubelet[3252]: I0113 22:26:10.198140 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-8kcns" podStartSLOduration=1.35079357 podStartE2EDuration="15.198105349s" podCreationTimestamp="2025-01-13 22:25:55 +0000 UTC" firstStartedPulling="2025-01-13 22:25:56.203060292 +0000 UTC m=+20.157534778" lastFinishedPulling="2025-01-13 22:26:10.050372074 +0000 UTC m=+34.004846557" observedRunningTime="2025-01-13 22:26:10.198018306 +0000 UTC m=+34.152492795" watchObservedRunningTime="2025-01-13 22:26:10.198105349 +0000 UTC m=+34.152579831" Jan 13 22:26:11.193067 kubelet[3252]: I0113 22:26:11.192982 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:26:12.253045 kubelet[3252]: I0113 22:26:12.252936 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:26:16.090626 containerd[1823]: time="2025-01-13T22:26:16.090507432Z" level=info msg="StopPodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\"" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.120 [INFO][4998] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.120 [INFO][4998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" iface="eth0" netns="/var/run/netns/cni-fd94e843-5d5f-9c78-0a58-1b421dc378f3" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.120 [INFO][4998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" iface="eth0" netns="/var/run/netns/cni-fd94e843-5d5f-9c78-0a58-1b421dc378f3" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.120 [INFO][4998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" iface="eth0" netns="/var/run/netns/cni-fd94e843-5d5f-9c78-0a58-1b421dc378f3" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.120 [INFO][4998] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.120 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.136 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.136 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.136 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.141 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.141 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.142 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:16.146733 containerd[1823]: 2025-01-13 22:26:16.145 [INFO][4998] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:16.147659 containerd[1823]: time="2025-01-13T22:26:16.146821100Z" level=info msg="TearDown network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" successfully" Jan 13 22:26:16.147659 containerd[1823]: time="2025-01-13T22:26:16.146849636Z" level=info msg="StopPodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" returns successfully" Jan 13 22:26:16.147659 containerd[1823]: time="2025-01-13T22:26:16.147552028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-gr6v6,Uid:61f640c6-316d-4aef-a0dc-40ac5d65e36f,Namespace:calico-apiserver,Attempt:1,}" Jan 13 22:26:16.149190 systemd[1]: run-netns-cni\x2dfd94e843\x2d5d5f\x2d9c78\x2d0a58\x2d1b421dc378f3.mount: Deactivated successfully. 
Jan 13 22:26:16.208092 systemd-networkd[1621]: cali606d63ad972: Link UP Jan 13 22:26:16.208192 systemd-networkd[1621]: cali606d63ad972: Gained carrier Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.163 [INFO][5029] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.170 [INFO][5029] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0 calico-apiserver-8cc7c86dd- calico-apiserver 61f640c6-316d-4aef-a0dc-40ac5d65e36f 759 0 2025-01-13 22:25:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cc7c86dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-8862dc3d2a calico-apiserver-8cc7c86dd-gr6v6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali606d63ad972 [] []}} ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.170 [INFO][5029] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.185 [INFO][5052] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" 
HandleID="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.191 [INFO][5052] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" HandleID="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029be70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-8862dc3d2a", "pod":"calico-apiserver-8cc7c86dd-gr6v6", "timestamp":"2025-01-13 22:26:16.185625472 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8862dc3d2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.191 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.191 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.191 [INFO][5052] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8862dc3d2a' Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.192 [INFO][5052] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.194 [INFO][5052] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.196 [INFO][5052] ipam/ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.197 [INFO][5052] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.198 [INFO][5052] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.198 [INFO][5052] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.198 [INFO][5052] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500 Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.200 [INFO][5052] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.203 [INFO][5052] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.193/26] block=192.168.97.192/26 handle="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.203 [INFO][5052] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.193/26] handle="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.203 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:16.213585 containerd[1823]: 2025-01-13 22:26:16.203 [INFO][5052] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.193/26] IPv6=[] ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" HandleID="k8s-pod-network.f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.213980 containerd[1823]: 2025-01-13 22:26:16.204 [INFO][5029] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"61f640c6-316d-4aef-a0dc-40ac5d65e36f", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"", Pod:"calico-apiserver-8cc7c86dd-gr6v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali606d63ad972", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:16.213980 containerd[1823]: 2025-01-13 22:26:16.204 [INFO][5029] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.193/32] ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.213980 containerd[1823]: 2025-01-13 22:26:16.204 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali606d63ad972 ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.213980 containerd[1823]: 2025-01-13 22:26:16.208 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" 
WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.213980 containerd[1823]: 2025-01-13 22:26:16.208 [INFO][5029] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"61f640c6-316d-4aef-a0dc-40ac5d65e36f", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500", Pod:"calico-apiserver-8cc7c86dd-gr6v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali606d63ad972", MAC:"86:66:0a:cb:22:ab", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:16.213980 containerd[1823]: 2025-01-13 22:26:16.212 [INFO][5029] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-gr6v6" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:16.222585 containerd[1823]: time="2025-01-13T22:26:16.222529467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:26:16.222751 containerd[1823]: time="2025-01-13T22:26:16.222737769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:26:16.222772 containerd[1823]: time="2025-01-13T22:26:16.222748801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:16.222802 containerd[1823]: time="2025-01-13T22:26:16.222792522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:16.246701 systemd[1]: Started cri-containerd-f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500.scope - libcontainer container f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500. 
Jan 13 22:26:16.274107 containerd[1823]: time="2025-01-13T22:26:16.274061559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-gr6v6,Uid:61f640c6-316d-4aef-a0dc-40ac5d65e36f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500\"" Jan 13 22:26:16.274985 containerd[1823]: time="2025-01-13T22:26:16.274940140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 22:26:17.089726 containerd[1823]: time="2025-01-13T22:26:17.089673201Z" level=info msg="StopPodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\"" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.113 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.113 [INFO][5168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" iface="eth0" netns="/var/run/netns/cni-45a65c99-2844-3288-c06a-2163d6fa1bdd" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.113 [INFO][5168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" iface="eth0" netns="/var/run/netns/cni-45a65c99-2844-3288-c06a-2163d6fa1bdd" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.113 [INFO][5168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" iface="eth0" netns="/var/run/netns/cni-45a65c99-2844-3288-c06a-2163d6fa1bdd" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.113 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.113 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.151 [INFO][5182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.152 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.152 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.161 [WARNING][5182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.161 [INFO][5182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.164 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:17.168002 containerd[1823]: 2025-01-13 22:26:17.166 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:17.169439 containerd[1823]: time="2025-01-13T22:26:17.168272682Z" level=info msg="TearDown network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" successfully" Jan 13 22:26:17.169439 containerd[1823]: time="2025-01-13T22:26:17.168337411Z" level=info msg="StopPodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" returns successfully" Jan 13 22:26:17.169439 containerd[1823]: time="2025-01-13T22:26:17.169374216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58696dffd4-626g8,Uid:0e56f5ab-a1b4-4441-b350-aad158e7cc51,Namespace:calico-system,Attempt:1,}" Jan 13 22:26:17.171605 systemd[1]: run-netns-cni\x2d45a65c99\x2d2844\x2d3288\x2dc06a\x2d2163d6fa1bdd.mount: Deactivated successfully. 
Jan 13 22:26:17.227423 systemd-networkd[1621]: cali4c140b65f8c: Link UP Jan 13 22:26:17.227558 systemd-networkd[1621]: cali4c140b65f8c: Gained carrier Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.185 [INFO][5200] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.192 [INFO][5200] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0 calico-kube-controllers-58696dffd4- calico-system 0e56f5ab-a1b4-4441-b350-aad158e7cc51 766 0 2025-01-13 22:25:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58696dffd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-8862dc3d2a calico-kube-controllers-58696dffd4-626g8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c140b65f8c [] []}} ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.192 [INFO][5200] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.208 [INFO][5220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" HandleID="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.212 [INFO][5220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" HandleID="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000435880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-8862dc3d2a", "pod":"calico-kube-controllers-58696dffd4-626g8", "timestamp":"2025-01-13 22:26:17.208233464 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8862dc3d2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.212 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.212 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.212 [INFO][5220] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8862dc3d2a' Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.213 [INFO][5220] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.215 [INFO][5220] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.218 [INFO][5220] ipam/ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.219 [INFO][5220] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.220 [INFO][5220] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.220 [INFO][5220] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.221 [INFO][5220] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0 Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.223 [INFO][5220] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.225 [INFO][5220] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.194/26] block=192.168.97.192/26 handle="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.225 [INFO][5220] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.194/26] handle="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.225 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:17.232315 containerd[1823]: 2025-01-13 22:26:17.225 [INFO][5220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.194/26] IPv6=[] ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" HandleID="k8s-pod-network.f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.232736 containerd[1823]: 2025-01-13 22:26:17.226 [INFO][5200] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0", GenerateName:"calico-kube-controllers-58696dffd4-", Namespace:"calico-system", SelfLink:"", UID:"0e56f5ab-a1b4-4441-b350-aad158e7cc51", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58696dffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"", Pod:"calico-kube-controllers-58696dffd4-626g8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c140b65f8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:17.232736 containerd[1823]: 2025-01-13 22:26:17.226 [INFO][5200] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.194/32] ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.232736 containerd[1823]: 2025-01-13 22:26:17.226 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c140b65f8c ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.232736 containerd[1823]: 2025-01-13 22:26:17.227 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.232736 containerd[1823]: 2025-01-13 22:26:17.227 [INFO][5200] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0", GenerateName:"calico-kube-controllers-58696dffd4-", Namespace:"calico-system", SelfLink:"", UID:"0e56f5ab-a1b4-4441-b350-aad158e7cc51", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58696dffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0", Pod:"calico-kube-controllers-58696dffd4-626g8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c140b65f8c", MAC:"4a:c6:05:df:b0:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:17.232736 containerd[1823]: 2025-01-13 22:26:17.231 [INFO][5200] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0" Namespace="calico-system" Pod="calico-kube-controllers-58696dffd4-626g8" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:17.241601 containerd[1823]: time="2025-01-13T22:26:17.241558458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:26:17.241601 containerd[1823]: time="2025-01-13T22:26:17.241587748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:26:17.241601 containerd[1823]: time="2025-01-13T22:26:17.241598020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:17.241899 containerd[1823]: time="2025-01-13T22:26:17.241883725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:17.256763 systemd[1]: Started cri-containerd-f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0.scope - libcontainer container f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0. 
Jan 13 22:26:17.281381 containerd[1823]: time="2025-01-13T22:26:17.281356386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58696dffd4-626g8,Uid:0e56f5ab-a1b4-4441-b350-aad158e7cc51,Namespace:calico-system,Attempt:1,} returns sandbox id \"f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0\"" Jan 13 22:26:17.550644 systemd-networkd[1621]: cali606d63ad972: Gained IPv6LL Jan 13 22:26:18.090181 containerd[1823]: time="2025-01-13T22:26:18.090149900Z" level=info msg="StopPodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\"" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.118 [INFO][5341] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.118 [INFO][5341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" iface="eth0" netns="/var/run/netns/cni-618bd3c1-08f4-325b-1711-e566ef534a4e" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.118 [INFO][5341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" iface="eth0" netns="/var/run/netns/cni-618bd3c1-08f4-325b-1711-e566ef534a4e" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.118 [INFO][5341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" iface="eth0" netns="/var/run/netns/cni-618bd3c1-08f4-325b-1711-e566ef534a4e" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.118 [INFO][5341] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.118 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.129 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.129 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.129 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.133 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.133 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.134 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:18.135481 containerd[1823]: 2025-01-13 22:26:18.134 [INFO][5341] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:18.135843 containerd[1823]: time="2025-01-13T22:26:18.135535093Z" level=info msg="TearDown network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" successfully" Jan 13 22:26:18.135843 containerd[1823]: time="2025-01-13T22:26:18.135553432Z" level=info msg="StopPodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" returns successfully" Jan 13 22:26:18.136005 containerd[1823]: time="2025-01-13T22:26:18.135989727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7jn9j,Uid:207cec62-0844-4486-bfd5-d28a764a464c,Namespace:kube-system,Attempt:1,}" Jan 13 22:26:18.149202 systemd[1]: run-netns-cni\x2d618bd3c1\x2d08f4\x2d325b\x2d1711\x2de566ef534a4e.mount: Deactivated successfully. 
Jan 13 22:26:18.197714 systemd-networkd[1621]: cali244f8ceb57e: Link UP Jan 13 22:26:18.197885 systemd-networkd[1621]: cali244f8ceb57e: Gained carrier Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.152 [INFO][5366] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.159 [INFO][5366] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0 coredns-76f75df574- kube-system 207cec62-0844-4486-bfd5-d28a764a464c 775 0 2025-01-13 22:25:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-8862dc3d2a coredns-76f75df574-7jn9j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali244f8ceb57e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.159 [INFO][5366] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.173 [INFO][5386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" HandleID="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 
22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.179 [INFO][5386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" HandleID="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002290c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-8862dc3d2a", "pod":"coredns-76f75df574-7jn9j", "timestamp":"2025-01-13 22:26:18.173981607 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8862dc3d2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.179 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.179 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.179 [INFO][5386] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8862dc3d2a' Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.180 [INFO][5386] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.183 [INFO][5386] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.186 [INFO][5386] ipam/ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.187 [INFO][5386] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.189 [INFO][5386] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.189 [INFO][5386] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.190 [INFO][5386] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86 Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.192 [INFO][5386] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.195 [INFO][5386] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.195/26] block=192.168.97.192/26 handle="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.195 [INFO][5386] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.195/26] handle="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.195 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:18.203343 containerd[1823]: 2025-01-13 22:26:18.195 [INFO][5386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.195/26] IPv6=[] ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" HandleID="k8s-pod-network.1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.203996 containerd[1823]: 2025-01-13 22:26:18.196 [INFO][5366] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"207cec62-0844-4486-bfd5-d28a764a464c", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"", Pod:"coredns-76f75df574-7jn9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244f8ceb57e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:18.203996 containerd[1823]: 2025-01-13 22:26:18.196 [INFO][5366] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.195/32] ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.203996 containerd[1823]: 2025-01-13 22:26:18.196 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali244f8ceb57e ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.203996 containerd[1823]: 2025-01-13 22:26:18.197 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.203996 containerd[1823]: 2025-01-13 22:26:18.197 [INFO][5366] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"207cec62-0844-4486-bfd5-d28a764a464c", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86", Pod:"coredns-76f75df574-7jn9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244f8ceb57e", MAC:"a2:bd:2a:8f:a4:d6", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:18.203996 containerd[1823]: 2025-01-13 22:26:18.202 [INFO][5366] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86" Namespace="kube-system" Pod="coredns-76f75df574-7jn9j" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:18.215548 containerd[1823]: time="2025-01-13T22:26:18.215467141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:26:18.215689 containerd[1823]: time="2025-01-13T22:26:18.215674843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:26:18.215689 containerd[1823]: time="2025-01-13T22:26:18.215684813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:18.215769 containerd[1823]: time="2025-01-13T22:26:18.215726086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:18.237613 systemd[1]: Started cri-containerd-1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86.scope - libcontainer container 1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86. 
Jan 13 22:26:18.254548 systemd-networkd[1621]: cali4c140b65f8c: Gained IPv6LL Jan 13 22:26:18.265036 containerd[1823]: time="2025-01-13T22:26:18.265009937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7jn9j,Uid:207cec62-0844-4486-bfd5-d28a764a464c,Namespace:kube-system,Attempt:1,} returns sandbox id \"1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86\"" Jan 13 22:26:18.266546 containerd[1823]: time="2025-01-13T22:26:18.266526553Z" level=info msg="CreateContainer within sandbox \"1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:26:18.303985 containerd[1823]: time="2025-01-13T22:26:18.303928057Z" level=info msg="CreateContainer within sandbox \"1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8060b77afccdca35f0ecc0edafe84ae0bc8ed92a048a4f67352d3cae11dc07f5\"" Jan 13 22:26:18.304209 containerd[1823]: time="2025-01-13T22:26:18.304196054Z" level=info msg="StartContainer for \"8060b77afccdca35f0ecc0edafe84ae0bc8ed92a048a4f67352d3cae11dc07f5\"" Jan 13 22:26:18.323587 systemd[1]: Started cri-containerd-8060b77afccdca35f0ecc0edafe84ae0bc8ed92a048a4f67352d3cae11dc07f5.scope - libcontainer container 8060b77afccdca35f0ecc0edafe84ae0bc8ed92a048a4f67352d3cae11dc07f5. 
Jan 13 22:26:18.334666 containerd[1823]: time="2025-01-13T22:26:18.334640878Z" level=info msg="StartContainer for \"8060b77afccdca35f0ecc0edafe84ae0bc8ed92a048a4f67352d3cae11dc07f5\" returns successfully" Jan 13 22:26:18.578080 containerd[1823]: time="2025-01-13T22:26:18.578003382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:18.578167 containerd[1823]: time="2025-01-13T22:26:18.578143937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 22:26:18.578551 containerd[1823]: time="2025-01-13T22:26:18.578534494Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:18.579732 containerd[1823]: time="2025-01-13T22:26:18.579719553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:18.580535 containerd[1823]: time="2025-01-13T22:26:18.580518566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.305557545s" Jan 13 22:26:18.580584 containerd[1823]: time="2025-01-13T22:26:18.580538015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 22:26:18.580878 containerd[1823]: time="2025-01-13T22:26:18.580868742Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 22:26:18.581482 containerd[1823]: time="2025-01-13T22:26:18.581467409Z" level=info msg="CreateContainer within sandbox \"f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 22:26:18.586213 containerd[1823]: time="2025-01-13T22:26:18.586192745Z" level=info msg="CreateContainer within sandbox \"f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3bb843467bbd95ad800eda2592251ef6490854ccce4b4501adf3b99843242e0b\"" Jan 13 22:26:18.586475 containerd[1823]: time="2025-01-13T22:26:18.586463226Z" level=info msg="StartContainer for \"3bb843467bbd95ad800eda2592251ef6490854ccce4b4501adf3b99843242e0b\"" Jan 13 22:26:18.611764 systemd[1]: Started cri-containerd-3bb843467bbd95ad800eda2592251ef6490854ccce4b4501adf3b99843242e0b.scope - libcontainer container 3bb843467bbd95ad800eda2592251ef6490854ccce4b4501adf3b99843242e0b. Jan 13 22:26:18.638584 containerd[1823]: time="2025-01-13T22:26:18.638561611Z" level=info msg="StartContainer for \"3bb843467bbd95ad800eda2592251ef6490854ccce4b4501adf3b99843242e0b\" returns successfully" Jan 13 22:26:19.090514 containerd[1823]: time="2025-01-13T22:26:19.090416329Z" level=info msg="StopPodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\"" Jan 13 22:26:19.090729 containerd[1823]: time="2025-01-13T22:26:19.090442966Z" level=info msg="StopPodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\"" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.139 [INFO][5635] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" iface="eth0" netns="/var/run/netns/cni-7ef9793d-479f-80ff-fcd2-5f8f8d4c063f" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" iface="eth0" netns="/var/run/netns/cni-7ef9793d-479f-80ff-fcd2-5f8f8d4c063f" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" iface="eth0" netns="/var/run/netns/cni-7ef9793d-479f-80ff-fcd2-5f8f8d4c063f" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5635] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.150 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.151 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.151 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.155 [WARNING][5663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.155 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.156 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:19.157387 containerd[1823]: 2025-01-13 22:26:19.156 [INFO][5635] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:19.157759 containerd[1823]: time="2025-01-13T22:26:19.157464065Z" level=info msg="TearDown network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" successfully" Jan 13 22:26:19.157759 containerd[1823]: time="2025-01-13T22:26:19.157482509Z" level=info msg="StopPodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" returns successfully" Jan 13 22:26:19.157989 containerd[1823]: time="2025-01-13T22:26:19.157949961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-xfvlc,Uid:f6346afb-6034-4ffe-a552-8d2d3bc5a71c,Namespace:calico-apiserver,Attempt:1,}" Jan 13 22:26:19.159395 systemd[1]: run-netns-cni\x2d7ef9793d\x2d479f\x2d80ff\x2dfcd2\x2d5f8f8d4c063f.mount: Deactivated successfully. 
Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5634] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5634] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" iface="eth0" netns="/var/run/netns/cni-5c1431b1-8858-2d22-d108-4791f64b4b09" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5634] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" iface="eth0" netns="/var/run/netns/cni-5c1431b1-8858-2d22-d108-4791f64b4b09" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5634] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" iface="eth0" netns="/var/run/netns/cni-5c1431b1-8858-2d22-d108-4791f64b4b09" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5634] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.140 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.150 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.151 [INFO][5664] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.156 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.159 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.159 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.160 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:19.162324 containerd[1823]: 2025-01-13 22:26:19.161 [INFO][5634] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:19.162591 containerd[1823]: time="2025-01-13T22:26:19.162379153Z" level=info msg="TearDown network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" successfully" Jan 13 22:26:19.162591 containerd[1823]: time="2025-01-13T22:26:19.162397361Z" level=info msg="StopPodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" returns successfully" Jan 13 22:26:19.162948 containerd[1823]: time="2025-01-13T22:26:19.162905048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-789dw,Uid:fddf42a2-d398-43f9-a55a-0d2b23c7af2d,Namespace:kube-system,Attempt:1,}" Jan 13 22:26:19.166270 systemd[1]: run-netns-cni\x2d5c1431b1\x2d8858\x2d2d22\x2dd108\x2d4791f64b4b09.mount: Deactivated successfully. Jan 13 22:26:19.215504 kubelet[3252]: I0113 22:26:19.215480 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8cc7c86dd-gr6v6" podStartSLOduration=21.9094772 podStartE2EDuration="24.215448499s" podCreationTimestamp="2025-01-13 22:25:55 +0000 UTC" firstStartedPulling="2025-01-13 22:26:16.274776149 +0000 UTC m=+40.229250639" lastFinishedPulling="2025-01-13 22:26:18.580747454 +0000 UTC m=+42.535221938" observedRunningTime="2025-01-13 22:26:19.21541767 +0000 UTC m=+43.169892153" watchObservedRunningTime="2025-01-13 22:26:19.215448499 +0000 UTC m=+43.169922979" Jan 13 22:26:19.220836 kubelet[3252]: I0113 22:26:19.220814 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7jn9j" podStartSLOduration=29.220780926 podStartE2EDuration="29.220780926s" podCreationTimestamp="2025-01-13 22:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:26:19.220441593 +0000 UTC m=+43.174916087" 
watchObservedRunningTime="2025-01-13 22:26:19.220780926 +0000 UTC m=+43.175255407" Jan 13 22:26:19.221233 systemd-networkd[1621]: cali10c0a5f3d0a: Link UP Jan 13 22:26:19.222978 systemd-networkd[1621]: cali10c0a5f3d0a: Gained carrier Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.173 [INFO][5698] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.179 [INFO][5698] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0 calico-apiserver-8cc7c86dd- calico-apiserver f6346afb-6034-4ffe-a552-8d2d3bc5a71c 788 0 2025-01-13 22:25:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cc7c86dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-8862dc3d2a calico-apiserver-8cc7c86dd-xfvlc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali10c0a5f3d0a [] []}} ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.179 [INFO][5698] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.195 [INFO][5741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" HandleID="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.202 [INFO][5741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" HandleID="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c87c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-8862dc3d2a", "pod":"calico-apiserver-8cc7c86dd-xfvlc", "timestamp":"2025-01-13 22:26:19.195309778 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8862dc3d2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.202 [INFO][5741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.202 [INFO][5741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.202 [INFO][5741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8862dc3d2a' Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.203 [INFO][5741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.206 [INFO][5741] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.209 [INFO][5741] ipam/ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.210 [INFO][5741] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.212 [INFO][5741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.212 [INFO][5741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.213 [INFO][5741] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2 Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.215 [INFO][5741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.218 [INFO][5741] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.196/26] block=192.168.97.192/26 handle="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.218 [INFO][5741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.196/26] handle="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.218 [INFO][5741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:19.227443 containerd[1823]: 2025-01-13 22:26:19.218 [INFO][5741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.196/26] IPv6=[] ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" HandleID="k8s-pod-network.2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.228082 containerd[1823]: 2025-01-13 22:26:19.219 [INFO][5698] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6346afb-6034-4ffe-a552-8d2d3bc5a71c", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"", Pod:"calico-apiserver-8cc7c86dd-xfvlc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10c0a5f3d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:19.228082 containerd[1823]: 2025-01-13 22:26:19.220 [INFO][5698] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.196/32] ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.228082 containerd[1823]: 2025-01-13 22:26:19.220 [INFO][5698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10c0a5f3d0a ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.228082 containerd[1823]: 2025-01-13 22:26:19.221 [INFO][5698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" 
WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.228082 containerd[1823]: 2025-01-13 22:26:19.221 [INFO][5698] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6346afb-6034-4ffe-a552-8d2d3bc5a71c", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2", Pod:"calico-apiserver-8cc7c86dd-xfvlc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10c0a5f3d0a", MAC:"6e:8c:15:2b:a3:49", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:19.228082 containerd[1823]: 2025-01-13 22:26:19.226 [INFO][5698] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2" Namespace="calico-apiserver" Pod="calico-apiserver-8cc7c86dd-xfvlc" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:19.234806 systemd-networkd[1621]: cali3f184ff7340: Link UP Jan 13 22:26:19.234906 systemd-networkd[1621]: cali3f184ff7340: Gained carrier Jan 13 22:26:19.237055 containerd[1823]: time="2025-01-13T22:26:19.236787645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:26:19.237055 containerd[1823]: time="2025-01-13T22:26:19.237015943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:26:19.237055 containerd[1823]: time="2025-01-13T22:26:19.237028004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:19.237168 containerd[1823]: time="2025-01-13T22:26:19.237076440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.178 [INFO][5711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.184 [INFO][5711] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0 coredns-76f75df574- kube-system fddf42a2-d398-43f9-a55a-0d2b23c7af2d 789 0 2025-01-13 22:25:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-8862dc3d2a coredns-76f75df574-789dw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f184ff7340 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.184 [INFO][5711] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.199 [INFO][5746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" HandleID="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.204 [INFO][5746] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" HandleID="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006338e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-8862dc3d2a", "pod":"coredns-76f75df574-789dw", "timestamp":"2025-01-13 22:26:19.199357983 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8862dc3d2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.204 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.218 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.218 [INFO][5746] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8862dc3d2a' Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.219 [INFO][5746] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.222 [INFO][5746] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.224 [INFO][5746] ipam/ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.225 [INFO][5746] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.227 [INFO][5746] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.227 [INFO][5746] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.227 [INFO][5746] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914 Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.229 [INFO][5746] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.233 [INFO][5746] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.197/26] block=192.168.97.192/26 handle="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.233 [INFO][5746] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.197/26] handle="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.233 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:19.240722 containerd[1823]: 2025-01-13 22:26:19.233 [INFO][5746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.197/26] IPv6=[] ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" HandleID="k8s-pod-network.5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.241139 containerd[1823]: 2025-01-13 22:26:19.234 [INFO][5711] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fddf42a2-d398-43f9-a55a-0d2b23c7af2d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"", Pod:"coredns-76f75df574-789dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f184ff7340", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:19.241139 containerd[1823]: 2025-01-13 22:26:19.234 [INFO][5711] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.197/32] ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.241139 containerd[1823]: 2025-01-13 22:26:19.234 [INFO][5711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f184ff7340 ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.241139 containerd[1823]: 2025-01-13 22:26:19.234 [INFO][5711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.241139 containerd[1823]: 2025-01-13 22:26:19.235 [INFO][5711] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fddf42a2-d398-43f9-a55a-0d2b23c7af2d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914", Pod:"coredns-76f75df574-789dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f184ff7340", MAC:"f6:1c:5b:75:3b:67", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:19.241139 containerd[1823]: 2025-01-13 22:26:19.239 [INFO][5711] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914" Namespace="kube-system" Pod="coredns-76f75df574-789dw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:19.250082 containerd[1823]: time="2025-01-13T22:26:19.250013922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:26:19.250082 containerd[1823]: time="2025-01-13T22:26:19.250042049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:26:19.250082 containerd[1823]: time="2025-01-13T22:26:19.250049203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:19.250193 containerd[1823]: time="2025-01-13T22:26:19.250089543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:19.257631 systemd[1]: Started cri-containerd-2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2.scope - libcontainer container 2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2. 
Jan 13 22:26:19.259315 systemd[1]: Started cri-containerd-5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914.scope - libcontainer container 5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914. Jan 13 22:26:19.279899 containerd[1823]: time="2025-01-13T22:26:19.279873020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cc7c86dd-xfvlc,Uid:f6346afb-6034-4ffe-a552-8d2d3bc5a71c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2\"" Jan 13 22:26:19.280156 containerd[1823]: time="2025-01-13T22:26:19.280144206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-789dw,Uid:fddf42a2-d398-43f9-a55a-0d2b23c7af2d,Namespace:kube-system,Attempt:1,} returns sandbox id \"5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914\"" Jan 13 22:26:19.281122 containerd[1823]: time="2025-01-13T22:26:19.281108592Z" level=info msg="CreateContainer within sandbox \"2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 22:26:19.281163 containerd[1823]: time="2025-01-13T22:26:19.281153420Z" level=info msg="CreateContainer within sandbox \"5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:26:19.285917 containerd[1823]: time="2025-01-13T22:26:19.285871938Z" level=info msg="CreateContainer within sandbox \"2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0f6867a59c5d081ebda414b82ce7fcb35a5d2d99c77081da9c168a7f366fcd67\"" Jan 13 22:26:19.286140 containerd[1823]: time="2025-01-13T22:26:19.286126784Z" level=info msg="StartContainer for \"0f6867a59c5d081ebda414b82ce7fcb35a5d2d99c77081da9c168a7f366fcd67\"" Jan 13 22:26:19.286731 containerd[1823]: 
time="2025-01-13T22:26:19.286717881Z" level=info msg="CreateContainer within sandbox \"5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d8ebee7f2de83259587380063d2c227084298cfc172794d852b5737e6f95927\"" Jan 13 22:26:19.286924 containerd[1823]: time="2025-01-13T22:26:19.286911525Z" level=info msg="StartContainer for \"0d8ebee7f2de83259587380063d2c227084298cfc172794d852b5737e6f95927\"" Jan 13 22:26:19.312746 systemd[1]: Started cri-containerd-0d8ebee7f2de83259587380063d2c227084298cfc172794d852b5737e6f95927.scope - libcontainer container 0d8ebee7f2de83259587380063d2c227084298cfc172794d852b5737e6f95927. Jan 13 22:26:19.313331 systemd[1]: Started cri-containerd-0f6867a59c5d081ebda414b82ce7fcb35a5d2d99c77081da9c168a7f366fcd67.scope - libcontainer container 0f6867a59c5d081ebda414b82ce7fcb35a5d2d99c77081da9c168a7f366fcd67. Jan 13 22:26:19.327706 containerd[1823]: time="2025-01-13T22:26:19.327677475Z" level=info msg="StartContainer for \"0d8ebee7f2de83259587380063d2c227084298cfc172794d852b5737e6f95927\" returns successfully" Jan 13 22:26:19.337495 containerd[1823]: time="2025-01-13T22:26:19.337475248Z" level=info msg="StartContainer for \"0f6867a59c5d081ebda414b82ce7fcb35a5d2d99c77081da9c168a7f366fcd67\" returns successfully" Jan 13 22:26:19.451355 kubelet[3252]: I0113 22:26:19.451272 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:26:19.586457 kernel: bpftool[6008]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 22:26:19.743197 systemd-networkd[1621]: vxlan.calico: Link UP Jan 13 22:26:19.743201 systemd-networkd[1621]: vxlan.calico: Gained carrier Jan 13 22:26:20.090341 containerd[1823]: time="2025-01-13T22:26:20.090258438Z" level=info msg="StopPodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\"" Jan 13 22:26:20.110778 systemd-networkd[1621]: cali244f8ceb57e: Gained IPv6LL Jan 13 
22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.146 [INFO][6198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.146 [INFO][6198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" iface="eth0" netns="/var/run/netns/cni-88e9d653-3416-8df4-5526-81fffb538950" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.146 [INFO][6198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" iface="eth0" netns="/var/run/netns/cni-88e9d653-3416-8df4-5526-81fffb538950" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.147 [INFO][6198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" iface="eth0" netns="/var/run/netns/cni-88e9d653-3416-8df4-5526-81fffb538950" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.147 [INFO][6198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.147 [INFO][6198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.161 [INFO][6213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.161 [INFO][6213] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.161 [INFO][6213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.165 [WARNING][6213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.165 [INFO][6213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.166 [INFO][6213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:20.168577 containerd[1823]: 2025-01-13 22:26:20.167 [INFO][6198] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:20.168911 containerd[1823]: time="2025-01-13T22:26:20.168641588Z" level=info msg="TearDown network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" successfully" Jan 13 22:26:20.168911 containerd[1823]: time="2025-01-13T22:26:20.168662752Z" level=info msg="StopPodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" returns successfully" Jan 13 22:26:20.169162 containerd[1823]: time="2025-01-13T22:26:20.169119373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zncfw,Uid:95289ad8-9d9d-491a-a4ad-267a7ba7fe8b,Namespace:calico-system,Attempt:1,}" Jan 13 22:26:20.170404 systemd[1]: run-netns-cni\x2d88e9d653\x2d3416\x2d8df4\x2d5526\x2d81fffb538950.mount: Deactivated successfully. Jan 13 22:26:20.213752 kubelet[3252]: I0113 22:26:20.213736 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:26:20.223831 kubelet[3252]: I0113 22:26:20.223809 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8cc7c86dd-xfvlc" podStartSLOduration=25.223775752 podStartE2EDuration="25.223775752s" podCreationTimestamp="2025-01-13 22:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:26:20.218553418 +0000 UTC m=+44.173027902" watchObservedRunningTime="2025-01-13 22:26:20.223775752 +0000 UTC m=+44.178250233" Jan 13 22:26:20.228364 systemd-networkd[1621]: calia5905512051: Link UP Jan 13 22:26:20.228499 systemd-networkd[1621]: calia5905512051: Gained carrier Jan 13 22:26:20.230080 kubelet[3252]: I0113 22:26:20.230060 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-789dw" podStartSLOduration=30.230032225 podStartE2EDuration="30.230032225s" 
podCreationTimestamp="2025-01-13 22:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:26:20.229951599 +0000 UTC m=+44.184426083" watchObservedRunningTime="2025-01-13 22:26:20.230032225 +0000 UTC m=+44.184506705" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.189 [INFO][6227] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0 csi-node-driver- calico-system 95289ad8-9d9d-491a-a4ad-267a7ba7fe8b 819 0 2025-01-13 22:25:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-8862dc3d2a csi-node-driver-zncfw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia5905512051 [] []}} ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.189 [INFO][6227] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.204 [INFO][6247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" HandleID="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" 
Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.209 [INFO][6247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" HandleID="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051c90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-8862dc3d2a", "pod":"csi-node-driver-zncfw", "timestamp":"2025-01-13 22:26:20.204648857 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8862dc3d2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.209 [INFO][6247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.209 [INFO][6247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.209 [INFO][6247] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8862dc3d2a' Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.210 [INFO][6247] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.212 [INFO][6247] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.214 [INFO][6247] ipam/ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.215 [INFO][6247] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.217 [INFO][6247] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.217 [INFO][6247] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.218 [INFO][6247] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0 Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.220 [INFO][6247] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.226 [INFO][6247] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.198/26] block=192.168.97.192/26 handle="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.226 [INFO][6247] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.198/26] handle="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" host="ci-4081.3.0-a-8862dc3d2a" Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.226 [INFO][6247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:20.233789 containerd[1823]: 2025-01-13 22:26:20.226 [INFO][6247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.198/26] IPv6=[] ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" HandleID="k8s-pod-network.2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.234323 containerd[1823]: 2025-01-13 22:26:20.227 [INFO][6227] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"", Pod:"csi-node-driver-zncfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5905512051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:20.234323 containerd[1823]: 2025-01-13 22:26:20.227 [INFO][6227] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.198/32] ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.234323 containerd[1823]: 2025-01-13 22:26:20.227 [INFO][6227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5905512051 ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.234323 containerd[1823]: 2025-01-13 22:26:20.228 [INFO][6227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.234323 containerd[1823]: 2025-01-13 22:26:20.228 
[INFO][6227] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0", Pod:"csi-node-driver-zncfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5905512051", MAC:"da:67:f6:0c:56:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:20.234323 containerd[1823]: 2025-01-13 22:26:20.233 [INFO][6227] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0" Namespace="calico-system" Pod="csi-node-driver-zncfw" WorkloadEndpoint="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:20.242804 containerd[1823]: time="2025-01-13T22:26:20.242762941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:26:20.242804 containerd[1823]: time="2025-01-13T22:26:20.242794932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:26:20.242804 containerd[1823]: time="2025-01-13T22:26:20.242801968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:20.242902 containerd[1823]: time="2025-01-13T22:26:20.242843360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:26:20.256638 systemd[1]: Started cri-containerd-2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0.scope - libcontainer container 2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0. 
Jan 13 22:26:20.269249 containerd[1823]: time="2025-01-13T22:26:20.269227662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zncfw,Uid:95289ad8-9d9d-491a-a4ad-267a7ba7fe8b,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0\"" Jan 13 22:26:20.945274 containerd[1823]: time="2025-01-13T22:26:20.945214527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:20.945390 containerd[1823]: time="2025-01-13T22:26:20.945371059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 22:26:20.945736 containerd[1823]: time="2025-01-13T22:26:20.945697409Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:20.947163 containerd[1823]: time="2025-01-13T22:26:20.947122990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:20.947414 containerd[1823]: time="2025-01-13T22:26:20.947377074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.366494506s" Jan 13 22:26:20.947414 containerd[1823]: time="2025-01-13T22:26:20.947392392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 22:26:20.947689 containerd[1823]: time="2025-01-13T22:26:20.947678177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 22:26:20.950621 containerd[1823]: time="2025-01-13T22:26:20.950606009Z" level=info msg="CreateContainer within sandbox \"f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 22:26:20.954717 containerd[1823]: time="2025-01-13T22:26:20.954675705Z" level=info msg="CreateContainer within sandbox \"f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"86cedbfc106f27c5335ba5475091f98107c2eff54cd2d85c2f2285bd015a9447\"" Jan 13 22:26:20.954897 containerd[1823]: time="2025-01-13T22:26:20.954886967Z" level=info msg="StartContainer for \"86cedbfc106f27c5335ba5475091f98107c2eff54cd2d85c2f2285bd015a9447\"" Jan 13 22:26:20.976734 systemd[1]: Started cri-containerd-86cedbfc106f27c5335ba5475091f98107c2eff54cd2d85c2f2285bd015a9447.scope - libcontainer container 86cedbfc106f27c5335ba5475091f98107c2eff54cd2d85c2f2285bd015a9447. 
Jan 13 22:26:21.000997 containerd[1823]: time="2025-01-13T22:26:21.000973414Z" level=info msg="StartContainer for \"86cedbfc106f27c5335ba5475091f98107c2eff54cd2d85c2f2285bd015a9447\" returns successfully" Jan 13 22:26:21.006602 systemd-networkd[1621]: cali10c0a5f3d0a: Gained IPv6LL Jan 13 22:26:21.134755 systemd-networkd[1621]: cali3f184ff7340: Gained IPv6LL Jan 13 22:26:21.229926 kubelet[3252]: I0113 22:26:21.229820 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58696dffd4-626g8" podStartSLOduration=21.564132579 podStartE2EDuration="25.229773236s" podCreationTimestamp="2025-01-13 22:25:56 +0000 UTC" firstStartedPulling="2025-01-13 22:26:17.281948288 +0000 UTC m=+41.236422776" lastFinishedPulling="2025-01-13 22:26:20.947588946 +0000 UTC m=+44.902063433" observedRunningTime="2025-01-13 22:26:21.229147011 +0000 UTC m=+45.183621513" watchObservedRunningTime="2025-01-13 22:26:21.229773236 +0000 UTC m=+45.184247734" Jan 13 22:26:21.646773 systemd-networkd[1621]: vxlan.calico: Gained IPv6LL Jan 13 22:26:22.158744 systemd-networkd[1621]: calia5905512051: Gained IPv6LL Jan 13 22:26:22.720684 containerd[1823]: time="2025-01-13T22:26:22.720659791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:22.720926 containerd[1823]: time="2025-01-13T22:26:22.720891678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 22:26:22.721271 containerd[1823]: time="2025-01-13T22:26:22.721259252Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:22.722157 containerd[1823]: time="2025-01-13T22:26:22.722146311Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:22.722574 containerd[1823]: time="2025-01-13T22:26:22.722561189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.774865146s" Jan 13 22:26:22.722596 containerd[1823]: time="2025-01-13T22:26:22.722577854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 22:26:22.723485 containerd[1823]: time="2025-01-13T22:26:22.723475178Z" level=info msg="CreateContainer within sandbox \"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 22:26:22.729708 containerd[1823]: time="2025-01-13T22:26:22.729682926Z" level=info msg="CreateContainer within sandbox \"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1e93eb3dabf37e54834fed9089672b3976e7f5a7ccca34ebb215e7176f9ab0c9\"" Jan 13 22:26:22.730003 containerd[1823]: time="2025-01-13T22:26:22.729988116Z" level=info msg="StartContainer for \"1e93eb3dabf37e54834fed9089672b3976e7f5a7ccca34ebb215e7176f9ab0c9\"" Jan 13 22:26:22.759653 systemd[1]: Started cri-containerd-1e93eb3dabf37e54834fed9089672b3976e7f5a7ccca34ebb215e7176f9ab0c9.scope - libcontainer container 1e93eb3dabf37e54834fed9089672b3976e7f5a7ccca34ebb215e7176f9ab0c9. 
Jan 13 22:26:22.773272 containerd[1823]: time="2025-01-13T22:26:22.773242348Z" level=info msg="StartContainer for \"1e93eb3dabf37e54834fed9089672b3976e7f5a7ccca34ebb215e7176f9ab0c9\" returns successfully" Jan 13 22:26:22.773947 containerd[1823]: time="2025-01-13T22:26:22.773929693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 22:26:24.664344 containerd[1823]: time="2025-01-13T22:26:24.664317808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:24.664681 containerd[1823]: time="2025-01-13T22:26:24.664605152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 22:26:24.665017 containerd[1823]: time="2025-01-13T22:26:24.665005263Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:24.665987 containerd[1823]: time="2025-01-13T22:26:24.665972388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:26:24.666367 containerd[1823]: time="2025-01-13T22:26:24.666342536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.892389207s" Jan 13 22:26:24.666403 containerd[1823]: time="2025-01-13T22:26:24.666369691Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 22:26:24.667586 containerd[1823]: time="2025-01-13T22:26:24.667508521Z" level=info msg="CreateContainer within sandbox \"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 22:26:24.674688 containerd[1823]: time="2025-01-13T22:26:24.674671170Z" level=info msg="CreateContainer within sandbox \"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"00120b82a35ba4d44c2b938af008ee472db85e2df7578b3fe5b111091122c14b\"" Jan 13 22:26:24.674993 containerd[1823]: time="2025-01-13T22:26:24.674978168Z" level=info msg="StartContainer for \"00120b82a35ba4d44c2b938af008ee472db85e2df7578b3fe5b111091122c14b\"" Jan 13 22:26:24.706728 systemd[1]: Started cri-containerd-00120b82a35ba4d44c2b938af008ee472db85e2df7578b3fe5b111091122c14b.scope - libcontainer container 00120b82a35ba4d44c2b938af008ee472db85e2df7578b3fe5b111091122c14b. 
Jan 13 22:26:24.719217 containerd[1823]: time="2025-01-13T22:26:24.719184717Z" level=info msg="StartContainer for \"00120b82a35ba4d44c2b938af008ee472db85e2df7578b3fe5b111091122c14b\" returns successfully" Jan 13 22:26:25.132304 kubelet[3252]: I0113 22:26:25.132211 3252 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 22:26:25.132304 kubelet[3252]: I0113 22:26:25.132284 3252 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 22:26:25.259339 kubelet[3252]: I0113 22:26:25.259277 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-zncfw" podStartSLOduration=25.86235874 podStartE2EDuration="30.259151421s" podCreationTimestamp="2025-01-13 22:25:55 +0000 UTC" firstStartedPulling="2025-01-13 22:26:20.269803939 +0000 UTC m=+44.224278421" lastFinishedPulling="2025-01-13 22:26:24.666596614 +0000 UTC m=+48.621071102" observedRunningTime="2025-01-13 22:26:25.258061596 +0000 UTC m=+49.212536158" watchObservedRunningTime="2025-01-13 22:26:25.259151421 +0000 UTC m=+49.213625963" Jan 13 22:26:36.086540 containerd[1823]: time="2025-01-13T22:26:36.086454013Z" level=info msg="StopPodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\"" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.107 [WARNING][6528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"207cec62-0844-4486-bfd5-d28a764a464c", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86", Pod:"coredns-76f75df574-7jn9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244f8ceb57e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.107 [INFO][6528] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.107 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" iface="eth0" netns="" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.107 [INFO][6528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.107 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.120 [INFO][6542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.120 [INFO][6542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.120 [INFO][6542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.125 [WARNING][6542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.125 [INFO][6542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.126 [INFO][6542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.128619 containerd[1823]: 2025-01-13 22:26:36.127 [INFO][6528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.129039 containerd[1823]: time="2025-01-13T22:26:36.128645186Z" level=info msg="TearDown network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" successfully" Jan 13 22:26:36.129039 containerd[1823]: time="2025-01-13T22:26:36.128663283Z" level=info msg="StopPodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" returns successfully" Jan 13 22:26:36.129039 containerd[1823]: time="2025-01-13T22:26:36.128906286Z" level=info msg="RemovePodSandbox for \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\"" Jan 13 22:26:36.129039 containerd[1823]: time="2025-01-13T22:26:36.128926926Z" level=info msg="Forcibly stopping sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\"" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.153 [WARNING][6572] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"207cec62-0844-4486-bfd5-d28a764a464c", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"1d4ffc9981ec8762660e72bc325f2402e08d829ae5a1a14b0b2dd7ad656fdf86", Pod:"coredns-76f75df574-7jn9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244f8ceb57e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.153 [INFO][6572] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.153 [INFO][6572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" iface="eth0" netns="" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.153 [INFO][6572] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.153 [INFO][6572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.167 [INFO][6588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.168 [INFO][6588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.168 [INFO][6588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.173 [WARNING][6588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.173 [INFO][6588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" HandleID="k8s-pod-network.22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--7jn9j-eth0" Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.177 [INFO][6588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.182329 containerd[1823]: 2025-01-13 22:26:36.179 [INFO][6572] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a" Jan 13 22:26:36.183926 containerd[1823]: time="2025-01-13T22:26:36.182415543Z" level=info msg="TearDown network for sandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" successfully" Jan 13 22:26:36.186349 containerd[1823]: time="2025-01-13T22:26:36.186336130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:26:36.186377 containerd[1823]: time="2025-01-13T22:26:36.186366314Z" level=info msg="RemovePodSandbox \"22553e8822fcb6222569dc6d867c3bd9e25a5da8f466bbbe148470ebf855154a\" returns successfully" Jan 13 22:26:36.186695 containerd[1823]: time="2025-01-13T22:26:36.186665005Z" level=info msg="StopPodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\"" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.205 [WARNING][6618] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0", Pod:"csi-node-driver-zncfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5905512051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.205 [INFO][6618] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.205 [INFO][6618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" iface="eth0" netns="" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.205 [INFO][6618] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.205 [INFO][6618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.215 [INFO][6631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.215 [INFO][6631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.215 [INFO][6631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.219 [WARNING][6631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.219 [INFO][6631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.220 [INFO][6631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.221832 containerd[1823]: 2025-01-13 22:26:36.221 [INFO][6618] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.221832 containerd[1823]: time="2025-01-13T22:26:36.221823821Z" level=info msg="TearDown network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" successfully" Jan 13 22:26:36.221832 containerd[1823]: time="2025-01-13T22:26:36.221838993Z" level=info msg="StopPodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" returns successfully" Jan 13 22:26:36.222162 containerd[1823]: time="2025-01-13T22:26:36.222116612Z" level=info msg="RemovePodSandbox for \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\"" Jan 13 22:26:36.222162 containerd[1823]: time="2025-01-13T22:26:36.222130699Z" level=info msg="Forcibly stopping sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\"" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.241 [WARNING][6661] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95289ad8-9d9d-491a-a4ad-267a7ba7fe8b", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"2f4b9f771ee5c43f57dc165ef28f2b7538ab8db17835c05bd1edb77f58807ab0", Pod:"csi-node-driver-zncfw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia5905512051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.242 [INFO][6661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.242 [INFO][6661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" iface="eth0" netns="" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.242 [INFO][6661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.242 [INFO][6661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.252 [INFO][6676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.252 [INFO][6676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.252 [INFO][6676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.256 [WARNING][6676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.256 [INFO][6676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" HandleID="k8s-pod-network.87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-csi--node--driver--zncfw-eth0" Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.257 [INFO][6676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.259344 containerd[1823]: 2025-01-13 22:26:36.258 [INFO][6661] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f" Jan 13 22:26:36.259688 containerd[1823]: time="2025-01-13T22:26:36.259367561Z" level=info msg="TearDown network for sandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" successfully" Jan 13 22:26:36.260723 containerd[1823]: time="2025-01-13T22:26:36.260711509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:26:36.260750 containerd[1823]: time="2025-01-13T22:26:36.260740691Z" level=info msg="RemovePodSandbox \"87c396ca487df64164941733e7e97bf97546d6db5a4da45f83425d121856278f\" returns successfully" Jan 13 22:26:36.260984 containerd[1823]: time="2025-01-13T22:26:36.260973410Z" level=info msg="StopPodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\"" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.279 [WARNING][6704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0", GenerateName:"calico-kube-controllers-58696dffd4-", Namespace:"calico-system", SelfLink:"", UID:"0e56f5ab-a1b4-4441-b350-aad158e7cc51", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58696dffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0", Pod:"calico-kube-controllers-58696dffd4-626g8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c140b65f8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.279 [INFO][6704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.279 [INFO][6704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" iface="eth0" netns="" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.279 [INFO][6704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.279 [INFO][6704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.289 [INFO][6721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.289 [INFO][6721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.289 [INFO][6721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.293 [WARNING][6721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.293 [INFO][6721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.295 [INFO][6721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.296622 containerd[1823]: 2025-01-13 22:26:36.295 [INFO][6704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.296622 containerd[1823]: time="2025-01-13T22:26:36.296608725Z" level=info msg="TearDown network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" successfully" Jan 13 22:26:36.296622 containerd[1823]: time="2025-01-13T22:26:36.296625906Z" level=info msg="StopPodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" returns successfully" Jan 13 22:26:36.296966 containerd[1823]: time="2025-01-13T22:26:36.296872729Z" level=info msg="RemovePodSandbox for \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\"" Jan 13 22:26:36.296966 containerd[1823]: time="2025-01-13T22:26:36.296892384Z" level=info msg="Forcibly stopping sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\"" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.318 [WARNING][6750] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0", GenerateName:"calico-kube-controllers-58696dffd4-", Namespace:"calico-system", SelfLink:"", UID:"0e56f5ab-a1b4-4441-b350-aad158e7cc51", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58696dffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"f93f27b27d7fa8445a822028a1db4817411b71b7bf01e485e32ea4b1c35719d0", Pod:"calico-kube-controllers-58696dffd4-626g8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c140b65f8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.318 [INFO][6750] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.318 [INFO][6750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" iface="eth0" netns="" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.318 [INFO][6750] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.318 [INFO][6750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.330 [INFO][6763] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.330 [INFO][6763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.331 [INFO][6763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.335 [WARNING][6763] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.335 [INFO][6763] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" HandleID="k8s-pod-network.d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--kube--controllers--58696dffd4--626g8-eth0" Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.336 [INFO][6763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.338002 containerd[1823]: 2025-01-13 22:26:36.337 [INFO][6750] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f" Jan 13 22:26:36.338002 containerd[1823]: time="2025-01-13T22:26:36.337950239Z" level=info msg="TearDown network for sandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" successfully" Jan 13 22:26:36.339455 containerd[1823]: time="2025-01-13T22:26:36.339412610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:26:36.339455 containerd[1823]: time="2025-01-13T22:26:36.339437944Z" level=info msg="RemovePodSandbox \"d0e9dc45097b694c12bf05e708acf7b1d398d483953c25c2dd9e7465862dd27f\" returns successfully" Jan 13 22:26:36.339749 containerd[1823]: time="2025-01-13T22:26:36.339709608Z" level=info msg="StopPodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\"" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.358 [WARNING][6792] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fddf42a2-d398-43f9-a55a-0d2b23c7af2d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914", Pod:"coredns-76f75df574-789dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f184ff7340", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.358 [INFO][6792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.358 [INFO][6792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" iface="eth0" netns="" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.358 [INFO][6792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.358 [INFO][6792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.369 [INFO][6805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.369 [INFO][6805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.369 [INFO][6805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.373 [WARNING][6805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.373 [INFO][6805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.374 [INFO][6805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.375871 containerd[1823]: 2025-01-13 22:26:36.375 [INFO][6792] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.376183 containerd[1823]: time="2025-01-13T22:26:36.375892683Z" level=info msg="TearDown network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" successfully" Jan 13 22:26:36.376183 containerd[1823]: time="2025-01-13T22:26:36.375909833Z" level=info msg="StopPodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" returns successfully" Jan 13 22:26:36.376183 containerd[1823]: time="2025-01-13T22:26:36.376178123Z" level=info msg="RemovePodSandbox for \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\"" Jan 13 22:26:36.376237 containerd[1823]: time="2025-01-13T22:26:36.376194927Z" level=info msg="Forcibly stopping sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\"" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.396 [WARNING][6832] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fddf42a2-d398-43f9-a55a-0d2b23c7af2d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"5c20894eded163588fdf7a08a99249bd939c0ce432ce2a76b9cbe87774938914", Pod:"coredns-76f75df574-789dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f184ff7340", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.396 [INFO][6832] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.396 [INFO][6832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" iface="eth0" netns="" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.396 [INFO][6832] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.396 [INFO][6832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.408 [INFO][6844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.408 [INFO][6844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.408 [INFO][6844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.412 [WARNING][6844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.412 [INFO][6844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" HandleID="k8s-pod-network.454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-coredns--76f75df574--789dw-eth0" Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.413 [INFO][6844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.414860 containerd[1823]: 2025-01-13 22:26:36.414 [INFO][6832] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64" Jan 13 22:26:36.415202 containerd[1823]: time="2025-01-13T22:26:36.414884692Z" level=info msg="TearDown network for sandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" successfully" Jan 13 22:26:36.416490 containerd[1823]: time="2025-01-13T22:26:36.416412446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:26:36.416490 containerd[1823]: time="2025-01-13T22:26:36.416440891Z" level=info msg="RemovePodSandbox \"454b8801f5b9ad844363ac7c9625634c2f43c79c4e913d33d4ca51174fa5fe64\" returns successfully" Jan 13 22:26:36.416707 containerd[1823]: time="2025-01-13T22:26:36.416667867Z" level=info msg="StopPodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\"" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.435 [WARNING][6875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"61f640c6-316d-4aef-a0dc-40ac5d65e36f", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500", Pod:"calico-apiserver-8cc7c86dd-gr6v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali606d63ad972", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.435 [INFO][6875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.435 [INFO][6875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" iface="eth0" netns="" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.435 [INFO][6875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.435 [INFO][6875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.446 [INFO][6888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.446 [INFO][6888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.446 [INFO][6888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.450 [WARNING][6888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.450 [INFO][6888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.452 [INFO][6888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.453789 containerd[1823]: 2025-01-13 22:26:36.452 [INFO][6875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.454153 containerd[1823]: time="2025-01-13T22:26:36.453785218Z" level=info msg="TearDown network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" successfully" Jan 13 22:26:36.454153 containerd[1823]: time="2025-01-13T22:26:36.453807940Z" level=info msg="StopPodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" returns successfully" Jan 13 22:26:36.454153 containerd[1823]: time="2025-01-13T22:26:36.454083164Z" level=info msg="RemovePodSandbox for \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\"" Jan 13 22:26:36.454153 containerd[1823]: time="2025-01-13T22:26:36.454104217Z" level=info msg="Forcibly stopping sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\"" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.480 [WARNING][6916] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"61f640c6-316d-4aef-a0dc-40ac5d65e36f", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"f1ca869f3353742972b067a0b851f9986a2c811debfd4525d213ef2c665a5500", Pod:"calico-apiserver-8cc7c86dd-gr6v6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali606d63ad972", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.481 [INFO][6916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.481 [INFO][6916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" iface="eth0" netns="" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.481 [INFO][6916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.481 [INFO][6916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.496 [INFO][6930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.496 [INFO][6930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.496 [INFO][6930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.500 [WARNING][6930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.500 [INFO][6930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" HandleID="k8s-pod-network.0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--gr6v6-eth0" Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.501 [INFO][6930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.502409 containerd[1823]: 2025-01-13 22:26:36.501 [INFO][6916] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c" Jan 13 22:26:36.502742 containerd[1823]: time="2025-01-13T22:26:36.502426310Z" level=info msg="TearDown network for sandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" successfully" Jan 13 22:26:36.503839 containerd[1823]: time="2025-01-13T22:26:36.503827259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:26:36.503863 containerd[1823]: time="2025-01-13T22:26:36.503854040Z" level=info msg="RemovePodSandbox \"0cef7e5b74022d3e331ab4f69225310246b044562b1a476166a744c60987c61c\" returns successfully" Jan 13 22:26:36.504130 containerd[1823]: time="2025-01-13T22:26:36.504119852Z" level=info msg="StopPodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\"" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.522 [WARNING][6958] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6346afb-6034-4ffe-a552-8d2d3bc5a71c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2", Pod:"calico-apiserver-8cc7c86dd-xfvlc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10c0a5f3d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.522 [INFO][6958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.522 [INFO][6958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" iface="eth0" netns="" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.522 [INFO][6958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.522 [INFO][6958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.533 [INFO][6969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.534 [INFO][6969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.534 [INFO][6969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.537 [WARNING][6969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.537 [INFO][6969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.538 [INFO][6969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.540010 containerd[1823]: 2025-01-13 22:26:36.539 [INFO][6958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.540010 containerd[1823]: time="2025-01-13T22:26:36.539997644Z" level=info msg="TearDown network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" successfully" Jan 13 22:26:36.540010 containerd[1823]: time="2025-01-13T22:26:36.540013541Z" level=info msg="StopPodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" returns successfully" Jan 13 22:26:36.540327 containerd[1823]: time="2025-01-13T22:26:36.540289440Z" level=info msg="RemovePodSandbox for \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\"" Jan 13 22:26:36.540327 containerd[1823]: time="2025-01-13T22:26:36.540303953Z" level=info msg="Forcibly stopping sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\"" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.558 [WARNING][6996] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0", GenerateName:"calico-apiserver-8cc7c86dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6346afb-6034-4ffe-a552-8d2d3bc5a71c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cc7c86dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8862dc3d2a", ContainerID:"2810e3cd3d7a57954201f6804acea2c9a0725b9ff097365dae7c2021f31542b2", Pod:"calico-apiserver-8cc7c86dd-xfvlc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10c0a5f3d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.559 [INFO][6996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.559 [INFO][6996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" iface="eth0" netns="" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.559 [INFO][6996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.559 [INFO][6996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.569 [INFO][7007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.569 [INFO][7007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.569 [INFO][7007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.573 [WARNING][7007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.573 [INFO][7007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" HandleID="k8s-pod-network.a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Workload="ci--4081.3.0--a--8862dc3d2a-k8s-calico--apiserver--8cc7c86dd--xfvlc-eth0" Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.574 [INFO][7007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:26:36.576074 containerd[1823]: 2025-01-13 22:26:36.575 [INFO][6996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621" Jan 13 22:26:36.576375 containerd[1823]: time="2025-01-13T22:26:36.576104846Z" level=info msg="TearDown network for sandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" successfully" Jan 13 22:26:36.577470 containerd[1823]: time="2025-01-13T22:26:36.577458128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 22:26:36.577499 containerd[1823]: time="2025-01-13T22:26:36.577484227Z" level=info msg="RemovePodSandbox \"a1a7b89377d14476aba0bbb03c834c265b13331e4861ceefa8e2ca61c4974621\" returns successfully" Jan 13 22:27:00.385132 kubelet[3252]: I0113 22:27:00.385023 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:27:58.372992 update_engine[1818]: I20250113 22:27:58.372852 1818 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 22:27:58.372992 update_engine[1818]: I20250113 22:27:58.372949 1818 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 22:27:58.374587 update_engine[1818]: I20250113 22:27:58.373301 1818 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 22:27:58.374587 update_engine[1818]: I20250113 22:27:58.374250 1818 omaha_request_params.cc:62] Current group set to lts Jan 13 22:27:58.374587 update_engine[1818]: I20250113 22:27:58.374495 1818 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 22:27:58.374587 update_engine[1818]: I20250113 22:27:58.374526 1818 update_attempter.cc:643] Scheduling an action processor start. 
Jan 13 22:27:58.374587 update_engine[1818]: I20250113 22:27:58.374559 1818 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 22:27:58.374980 update_engine[1818]: I20250113 22:27:58.374626 1818 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 22:27:58.374980 update_engine[1818]: I20250113 22:27:58.374771 1818 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 22:27:58.374980 update_engine[1818]: I20250113 22:27:58.374799 1818 omaha_request_action.cc:272] Request: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: Jan 13 22:27:58.374980 update_engine[1818]: I20250113 22:27:58.374814 1818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 22:27:58.375890 locksmithd[1852]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 22:27:58.377806 update_engine[1818]: I20250113 22:27:58.377769 1818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 22:27:58.377972 update_engine[1818]: I20250113 22:27:58.377933 1818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 22:27:58.378648 update_engine[1818]: E20250113 22:27:58.378602 1818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 22:27:58.378648 update_engine[1818]: I20250113 22:27:58.378634 1818 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 22:28:08.380965 update_engine[1818]: I20250113 22:28:08.380729 1818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 22:28:08.381863 update_engine[1818]: I20250113 22:28:08.381209 1818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 22:28:08.381863 update_engine[1818]: I20250113 22:28:08.381708 1818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 22:28:08.382504 update_engine[1818]: E20250113 22:28:08.382375 1818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 22:28:08.382683 update_engine[1818]: I20250113 22:28:08.382534 1818 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 22:28:18.380840 update_engine[1818]: I20250113 22:28:18.380678 1818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 22:28:18.381743 update_engine[1818]: I20250113 22:28:18.381192 1818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 22:28:18.381743 update_engine[1818]: I20250113 22:28:18.381718 1818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 22:28:18.382691 update_engine[1818]: E20250113 22:28:18.382586 1818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 22:28:18.382878 update_engine[1818]: I20250113 22:28:18.382717 1818 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 22:28:28.380769 update_engine[1818]: I20250113 22:28:28.380609 1818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 22:28:28.381737 update_engine[1818]: I20250113 22:28:28.381130 1818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 22:28:28.381737 update_engine[1818]: I20250113 22:28:28.381650 1818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 22:28:28.382612 update_engine[1818]: E20250113 22:28:28.382503 1818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 22:28:28.382802 update_engine[1818]: I20250113 22:28:28.382634 1818 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 22:28:28.382802 update_engine[1818]: I20250113 22:28:28.382661 1818 omaha_request_action.cc:617] Omaha request response: Jan 13 22:28:28.382988 update_engine[1818]: E20250113 22:28:28.382813 1818 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 22:28:28.382988 update_engine[1818]: I20250113 22:28:28.382860 1818 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 22:28:28.382988 update_engine[1818]: I20250113 22:28:28.382877 1818 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 22:28:28.382988 update_engine[1818]: I20250113 22:28:28.382889 1818 update_attempter.cc:306] Processing Done. Jan 13 22:28:28.382988 update_engine[1818]: E20250113 22:28:28.382920 1818 update_attempter.cc:619] Update failed. 
Jan 13 22:28:28.382988 update_engine[1818]: I20250113 22:28:28.382934 1818 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 22:28:28.382988 update_engine[1818]: I20250113 22:28:28.382948 1818 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 22:28:28.382988 update_engine[1818]: I20250113 22:28:28.382962 1818 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 22:28:28.383620 update_engine[1818]: I20250113 22:28:28.383106 1818 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 22:28:28.383620 update_engine[1818]: I20250113 22:28:28.383164 1818 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 22:28:28.383620 update_engine[1818]: I20250113 22:28:28.383180 1818 omaha_request_action.cc:272] Request: Jan 13 22:28:28.383620 update_engine[1818]: Jan 13 22:28:28.383620 update_engine[1818]: Jan 13 22:28:28.383620 update_engine[1818]: Jan 13 22:28:28.383620 update_engine[1818]: Jan 13 22:28:28.383620 update_engine[1818]: Jan 13 22:28:28.383620 update_engine[1818]: Jan 13 22:28:28.383620 update_engine[1818]: I20250113 22:28:28.383195 1818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 22:28:28.383620 update_engine[1818]: I20250113 22:28:28.383586 1818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 22:28:28.384402 update_engine[1818]: I20250113 22:28:28.383980 1818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 22:28:28.384517 locksmithd[1852]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 22:28:28.385088 update_engine[1818]: E20250113 22:28:28.384853 1818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.384972 1818 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.384999 1818 omaha_request_action.cc:617] Omaha request response: Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.385015 1818 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.385028 1818 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.385042 1818 update_attempter.cc:306] Processing Done. Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.385057 1818 update_attempter.cc:310] Error event sent. Jan 13 22:28:28.385088 update_engine[1818]: I20250113 22:28:28.385078 1818 update_check_scheduler.cc:74] Next update check in 48m40s Jan 13 22:28:28.385770 locksmithd[1852]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 22:31:39.283152 systemd[1]: Started sshd@9-147.75.202.79:22-139.178.89.65:44516.service - OpenSSH per-connection server daemon (139.178.89.65:44516). Jan 13 22:31:39.312560 sshd[7719]: Accepted publickey for core from 139.178.89.65 port 44516 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:31:39.313472 sshd[7719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:31:39.316840 systemd-logind[1813]: New session 12 of user core. 
Jan 13 22:31:39.331703 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 22:31:39.462649 sshd[7719]: pam_unix(sshd:session): session closed for user core Jan 13 22:31:39.464259 systemd[1]: sshd@9-147.75.202.79:22-139.178.89.65:44516.service: Deactivated successfully. Jan 13 22:31:39.465192 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 22:31:39.465958 systemd-logind[1813]: Session 12 logged out. Waiting for processes to exit. Jan 13 22:31:39.466516 systemd-logind[1813]: Removed session 12. Jan 13 22:31:44.480672 systemd[1]: Started sshd@10-147.75.202.79:22-139.178.89.65:56920.service - OpenSSH per-connection server daemon (139.178.89.65:56920). Jan 13 22:31:44.511367 sshd[7804]: Accepted publickey for core from 139.178.89.65 port 56920 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:31:44.512133 sshd[7804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:31:44.515179 systemd-logind[1813]: New session 13 of user core. Jan 13 22:31:44.530632 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 22:31:44.616647 sshd[7804]: pam_unix(sshd:session): session closed for user core Jan 13 22:31:44.618265 systemd[1]: sshd@10-147.75.202.79:22-139.178.89.65:56920.service: Deactivated successfully. Jan 13 22:31:44.619199 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 22:31:44.619922 systemd-logind[1813]: Session 13 logged out. Waiting for processes to exit. Jan 13 22:31:44.620400 systemd-logind[1813]: Removed session 13. Jan 13 22:31:49.629736 systemd[1]: Started sshd@11-147.75.202.79:22-139.178.89.65:56932.service - OpenSSH per-connection server daemon (139.178.89.65:56932). 
Jan 13 22:31:49.661565 sshd[7835]: Accepted publickey for core from 139.178.89.65 port 56932 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:31:49.662195 sshd[7835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:31:49.664744 systemd-logind[1813]: New session 14 of user core. Jan 13 22:31:49.677690 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 22:31:49.760153 sshd[7835]: pam_unix(sshd:session): session closed for user core Jan 13 22:31:49.781232 systemd[1]: sshd@11-147.75.202.79:22-139.178.89.65:56932.service: Deactivated successfully. Jan 13 22:31:49.782094 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 22:31:49.782925 systemd-logind[1813]: Session 14 logged out. Waiting for processes to exit. Jan 13 22:31:49.783600 systemd[1]: Started sshd@12-147.75.202.79:22-139.178.89.65:56946.service - OpenSSH per-connection server daemon (139.178.89.65:56946). Jan 13 22:31:49.784239 systemd-logind[1813]: Removed session 14. Jan 13 22:31:49.812520 sshd[7862]: Accepted publickey for core from 139.178.89.65 port 56946 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:31:49.813351 sshd[7862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:31:49.816509 systemd-logind[1813]: New session 15 of user core. Jan 13 22:31:49.826643 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 22:31:49.935875 sshd[7862]: pam_unix(sshd:session): session closed for user core Jan 13 22:31:49.951432 systemd[1]: sshd@12-147.75.202.79:22-139.178.89.65:56946.service: Deactivated successfully. Jan 13 22:31:49.952378 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 22:31:49.953121 systemd-logind[1813]: Session 15 logged out. Waiting for processes to exit. Jan 13 22:31:49.953822 systemd[1]: Started sshd@13-147.75.202.79:22-139.178.89.65:56950.service - OpenSSH per-connection server daemon (139.178.89.65:56950). 
Jan 13 22:31:49.954272 systemd-logind[1813]: Removed session 15. Jan 13 22:31:49.983889 sshd[7887]: Accepted publickey for core from 139.178.89.65 port 56950 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:31:49.984837 sshd[7887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:31:49.988367 systemd-logind[1813]: New session 16 of user core. Jan 13 22:31:50.001766 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 22:31:50.162181 sshd[7887]: pam_unix(sshd:session): session closed for user core Jan 13 22:31:50.164648 systemd[1]: sshd@13-147.75.202.79:22-139.178.89.65:56950.service: Deactivated successfully. Jan 13 22:31:50.165748 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 22:31:50.166190 systemd-logind[1813]: Session 16 logged out. Waiting for processes to exit. Jan 13 22:31:50.166958 systemd-logind[1813]: Removed session 16. Jan 13 22:31:55.196699 systemd[1]: Started sshd@14-147.75.202.79:22-139.178.89.65:55278.service - OpenSSH per-connection server daemon (139.178.89.65:55278). Jan 13 22:31:55.222564 sshd[7921]: Accepted publickey for core from 139.178.89.65 port 55278 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:31:55.223293 sshd[7921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:31:55.226141 systemd-logind[1813]: New session 17 of user core. Jan 13 22:31:55.226782 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 22:31:55.312621 sshd[7921]: pam_unix(sshd:session): session closed for user core Jan 13 22:31:55.314229 systemd[1]: sshd@14-147.75.202.79:22-139.178.89.65:55278.service: Deactivated successfully. Jan 13 22:31:55.315170 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 22:31:55.315940 systemd-logind[1813]: Session 17 logged out. Waiting for processes to exit. Jan 13 22:31:55.316519 systemd-logind[1813]: Removed session 17. 
Jan 13 22:32:00.331953 systemd[1]: Started sshd@15-147.75.202.79:22-139.178.89.65:55288.service - OpenSSH per-connection server daemon (139.178.89.65:55288). Jan 13 22:32:00.363537 sshd[7965]: Accepted publickey for core from 139.178.89.65 port 55288 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:00.364344 sshd[7965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:00.367417 systemd-logind[1813]: New session 18 of user core. Jan 13 22:32:00.383728 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 22:32:00.467848 sshd[7965]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:00.469313 systemd[1]: sshd@15-147.75.202.79:22-139.178.89.65:55288.service: Deactivated successfully. Jan 13 22:32:00.470243 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 22:32:00.470986 systemd-logind[1813]: Session 18 logged out. Waiting for processes to exit. Jan 13 22:32:00.471446 systemd-logind[1813]: Removed session 18. Jan 13 22:32:05.479357 systemd[1]: Started sshd@16-147.75.202.79:22-139.178.89.65:44938.service - OpenSSH per-connection server daemon (139.178.89.65:44938). Jan 13 22:32:05.514647 sshd[7991]: Accepted publickey for core from 139.178.89.65 port 44938 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:05.517877 sshd[7991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:05.522418 systemd-logind[1813]: New session 19 of user core. Jan 13 22:32:05.536932 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 22:32:05.680918 sshd[7991]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:05.683133 systemd[1]: sshd@16-147.75.202.79:22-139.178.89.65:44938.service: Deactivated successfully. Jan 13 22:32:05.684152 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 22:32:05.684631 systemd-logind[1813]: Session 19 logged out. Waiting for processes to exit. 
Jan 13 22:32:05.685277 systemd-logind[1813]: Removed session 19. Jan 13 22:32:10.698307 systemd[1]: Started sshd@17-147.75.202.79:22-139.178.89.65:44944.service - OpenSSH per-connection server daemon (139.178.89.65:44944). Jan 13 22:32:10.738380 sshd[8017]: Accepted publickey for core from 139.178.89.65 port 44944 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:10.739771 sshd[8017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:10.744394 systemd-logind[1813]: New session 20 of user core. Jan 13 22:32:10.756811 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 22:32:10.906825 sshd[8017]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:10.923158 systemd[1]: sshd@17-147.75.202.79:22-139.178.89.65:44944.service: Deactivated successfully. Jan 13 22:32:10.923984 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 22:32:10.924722 systemd-logind[1813]: Session 20 logged out. Waiting for processes to exit. Jan 13 22:32:10.925362 systemd[1]: Started sshd@18-147.75.202.79:22-139.178.89.65:44954.service - OpenSSH per-connection server daemon (139.178.89.65:44954). Jan 13 22:32:10.925909 systemd-logind[1813]: Removed session 20. Jan 13 22:32:10.953961 sshd[8043]: Accepted publickey for core from 139.178.89.65 port 44954 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:10.957238 sshd[8043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:10.967738 systemd-logind[1813]: New session 21 of user core. Jan 13 22:32:10.978882 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 22:32:11.237329 sshd[8043]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:11.274244 systemd[1]: sshd@18-147.75.202.79:22-139.178.89.65:44954.service: Deactivated successfully. Jan 13 22:32:11.278143 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 13 22:32:11.281395 systemd-logind[1813]: Session 21 logged out. Waiting for processes to exit. Jan 13 22:32:11.298276 systemd[1]: Started sshd@19-147.75.202.79:22-139.178.89.65:54290.service - OpenSSH per-connection server daemon (139.178.89.65:54290). Jan 13 22:32:11.300727 systemd-logind[1813]: Removed session 21. Jan 13 22:32:11.358724 sshd[8066]: Accepted publickey for core from 139.178.89.65 port 54290 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:11.359795 sshd[8066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:11.363149 systemd-logind[1813]: New session 22 of user core. Jan 13 22:32:11.380708 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 22:32:12.663834 sshd[8066]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:12.680738 systemd[1]: sshd@19-147.75.202.79:22-139.178.89.65:54290.service: Deactivated successfully. Jan 13 22:32:12.685869 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 22:32:12.688925 systemd-logind[1813]: Session 22 logged out. Waiting for processes to exit. Jan 13 22:32:12.706117 systemd[1]: Started sshd@20-147.75.202.79:22-139.178.89.65:54306.service - OpenSSH per-connection server daemon (139.178.89.65:54306). Jan 13 22:32:12.707924 systemd-logind[1813]: Removed session 22. Jan 13 22:32:12.747389 sshd[8145]: Accepted publickey for core from 139.178.89.65 port 54306 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:12.748804 sshd[8145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:12.753363 systemd-logind[1813]: New session 23 of user core. Jan 13 22:32:12.771805 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 22:32:12.957022 sshd[8145]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:12.969410 systemd[1]: sshd@20-147.75.202.79:22-139.178.89.65:54306.service: Deactivated successfully. 
Jan 13 22:32:12.970422 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 22:32:12.971297 systemd-logind[1813]: Session 23 logged out. Waiting for processes to exit. Jan 13 22:32:12.972137 systemd[1]: Started sshd@21-147.75.202.79:22-139.178.89.65:54316.service - OpenSSH per-connection server daemon (139.178.89.65:54316). Jan 13 22:32:12.972929 systemd-logind[1813]: Removed session 23. Jan 13 22:32:13.004937 sshd[8171]: Accepted publickey for core from 139.178.89.65 port 54316 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:13.006078 sshd[8171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:13.009571 systemd-logind[1813]: New session 24 of user core. Jan 13 22:32:13.021649 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 22:32:13.149379 sshd[8171]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:13.151390 systemd[1]: sshd@21-147.75.202.79:22-139.178.89.65:54316.service: Deactivated successfully. Jan 13 22:32:13.152304 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 22:32:13.152692 systemd-logind[1813]: Session 24 logged out. Waiting for processes to exit. Jan 13 22:32:13.153301 systemd-logind[1813]: Removed session 24. Jan 13 22:32:18.192818 systemd[1]: Started sshd@22-147.75.202.79:22-139.178.89.65:54326.service - OpenSSH per-connection server daemon (139.178.89.65:54326). Jan 13 22:32:18.220656 sshd[8201]: Accepted publickey for core from 139.178.89.65 port 54326 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:18.221728 sshd[8201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:18.225866 systemd-logind[1813]: New session 25 of user core. Jan 13 22:32:18.235740 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 22:32:18.323803 sshd[8201]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:18.325440 systemd[1]: sshd@22-147.75.202.79:22-139.178.89.65:54326.service: Deactivated successfully. Jan 13 22:32:18.326428 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 22:32:18.327220 systemd-logind[1813]: Session 25 logged out. Waiting for processes to exit. Jan 13 22:32:18.327947 systemd-logind[1813]: Removed session 25. Jan 13 22:32:23.340716 systemd[1]: Started sshd@23-147.75.202.79:22-139.178.89.65:43448.service - OpenSSH per-connection server daemon (139.178.89.65:43448). Jan 13 22:32:23.369255 sshd[8229]: Accepted publickey for core from 139.178.89.65 port 43448 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:23.372479 sshd[8229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:23.383083 systemd-logind[1813]: New session 26 of user core. Jan 13 22:32:23.408889 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 22:32:23.500339 sshd[8229]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:23.502290 systemd[1]: sshd@23-147.75.202.79:22-139.178.89.65:43448.service: Deactivated successfully. Jan 13 22:32:23.503479 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 22:32:23.504364 systemd-logind[1813]: Session 26 logged out. Waiting for processes to exit. Jan 13 22:32:23.505146 systemd-logind[1813]: Removed session 26. Jan 13 22:32:28.525105 systemd[1]: Started sshd@24-147.75.202.79:22-139.178.89.65:43454.service - OpenSSH per-connection server daemon (139.178.89.65:43454). Jan 13 22:32:28.553226 sshd[8255]: Accepted publickey for core from 139.178.89.65 port 43454 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:32:28.554110 sshd[8255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:32:28.557159 systemd-logind[1813]: New session 27 of user core. 
Jan 13 22:32:28.573702 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 22:32:28.661617 sshd[8255]: pam_unix(sshd:session): session closed for user core Jan 13 22:32:28.663634 systemd[1]: sshd@24-147.75.202.79:22-139.178.89.65:43454.service: Deactivated successfully. Jan 13 22:32:28.664538 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 22:32:28.664955 systemd-logind[1813]: Session 27 logged out. Waiting for processes to exit. Jan 13 22:32:28.665452 systemd-logind[1813]: Removed session 27.