Jan 30 15:32:08.001829 kernel: microcode: updated early: 0xde -> 0xfc, date = 2023-07-27
Jan 30 15:32:08.001843 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 15:32:08.001849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 15:32:08.001855 kernel: BIOS-provided physical RAM map:
Jan 30 15:32:08.001858 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 30 15:32:08.001862 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 30 15:32:08.001867 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 30 15:32:08.001871 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 30 15:32:08.001875 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 30 15:32:08.001879 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000620bafff] usable
Jan 30 15:32:08.001883 kernel: BIOS-e820: [mem 0x00000000620bb000-0x00000000620bbfff] ACPI NVS
Jan 30 15:32:08.001888 kernel: BIOS-e820: [mem 0x00000000620bc000-0x00000000620bcfff] reserved
Jan 30 15:32:08.001892 kernel: BIOS-e820: [mem 0x00000000620bd000-0x000000006c0c4fff] usable
Jan 30 15:32:08.001896 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Jan 30 15:32:08.001901 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Jan 30 15:32:08.001906 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Jan 30 15:32:08.001911 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Jan 30 15:32:08.001916 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Jan 30 15:32:08.001921 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Jan 30 15:32:08.001925 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 15:32:08.001930 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 30 15:32:08.001934 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 30 15:32:08.001939 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 15:32:08.001943 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 30 15:32:08.001948 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Jan 30 15:32:08.001953 kernel: NX (Execute Disable) protection: active
Jan 30 15:32:08.001957 kernel: APIC: Static calls initialized
Jan 30 15:32:08.001963 kernel: SMBIOS 3.2.1 present.
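The e820 map printed above is machine-readable. As a quick aside, here is a minimal sketch (Python, with the hypothetical assumption that this journal text has been saved as `boot.log`) that totals the firmware-reported `usable` ranges:

```python
import re

# Minimal sketch: total the "usable" regions from the BIOS-e820 lines above.
# Assumes this journal text has been saved to boot.log (hypothetical name).
pat = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

total = 0
with open("boot.log") as f:
    for line in f:
        m = pat.search(line)
        if m:
            start, end = (int(g, 16) for g in m.groups())
            total += end - start + 1  # e820 ranges are inclusive

print(f"firmware-reported usable RAM: {total / 2**30:.2f} GiB")
```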
Jan 30 15:32:08.001967 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Jan 30 15:32:08.001972 kernel: tsc: Detected 3400.000 MHz processor
Jan 30 15:32:08.001977 kernel: tsc: Detected 3399.906 MHz TSC
Jan 30 15:32:08.001982 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 15:32:08.001987 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 15:32:08.001991 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Jan 30 15:32:08.001996 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 30 15:32:08.002001 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 15:32:08.002006 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Jan 30 15:32:08.002011 kernel: Using GB pages for direct mapping
Jan 30 15:32:08.002016 kernel: ACPI: Early table checksum verification disabled
Jan 30 15:32:08.002021 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 30 15:32:08.002028 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 30 15:32:08.002033 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Jan 30 15:32:08.002038 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 30 15:32:08.002044 kernel: ACPI: FACS 0x000000006D762F80 000040
Jan 30 15:32:08.002049 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Jan 30 15:32:08.002054 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Jan 30 15:32:08.002058 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 30 15:32:08.002063 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 30 15:32:08.002068 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 30 15:32:08.002073 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 30 15:32:08.002078 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 30 15:32:08.002084 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 30 15:32:08.002089 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 15:32:08.002094 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 30 15:32:08.002098 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 30 15:32:08.002103 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 15:32:08.002108 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 15:32:08.002113 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 30 15:32:08.002118 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 30 15:32:08.002124 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 15:32:08.002129 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 30 15:32:08.002134 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 30 15:32:08.002139 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Jan 30 15:32:08.002143 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 30 15:32:08.002148 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 30 15:32:08.002153 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 30 15:32:08.002158 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 ?b 01072009 AMI 00010013)
Jan 30 15:32:08.002163 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 30 15:32:08.002169 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 30 15:32:08.002174 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 30 15:32:08.002179 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 30 15:32:08.002184 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 30 15:32:08.002189 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Jan 30 15:32:08.002194 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Jan 30 15:32:08.002198 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Jan 30 15:32:08.002203 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Jan 30 15:32:08.002208 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Jan 30 15:32:08.002214 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Jan 30 15:32:08.002219 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Jan 30 15:32:08.002224 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Jan 30 15:32:08.002228 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Jan 30 15:32:08.002233 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Jan 30 15:32:08.002238 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Jan 30 15:32:08.002243 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Jan 30 15:32:08.002248 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Jan 30 15:32:08.002253 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Jan 30 15:32:08.002258 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Jan 30 15:32:08.002263 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Jan 30 15:32:08.002268 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Jan 30 15:32:08.002273 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Jan 30 15:32:08.002278 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Jan 30 15:32:08.002283 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Jan 30 15:32:08.002288 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Jan 30 15:32:08.002292 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Jan 30 15:32:08.002297 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Jan 30 15:32:08.002303 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Jan 30 15:32:08.002308 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Jan 30 15:32:08.002313 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Jan 30 15:32:08.002318 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Jan 30 15:32:08.002323 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Jan 30 15:32:08.002327 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Jan 30 15:32:08.002332 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Jan 30 15:32:08.002337 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Jan 30 15:32:08.002342 kernel: No NUMA configuration found
Jan 30 15:32:08.002351 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Jan 30 15:32:08.002356 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Jan 30 15:32:08.002361 kernel: Zone ranges:
Jan 30 15:32:08.002366 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 15:32:08.002371 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 15:32:08.002376 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Jan 30 15:32:08.002381 kernel: Movable zone start for each node
Jan 30 15:32:08.002385 kernel: Early memory node ranges
Jan 30 15:32:08.002390 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 30 15:32:08.002395 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 30 15:32:08.002401 kernel: node 0: [mem 0x0000000040400000-0x00000000620bafff]
Jan 30 15:32:08.002406 kernel: node 0: [mem 0x00000000620bd000-0x000000006c0c4fff]
Jan 30 15:32:08.002411 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Jan 30 15:32:08.002416 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Jan 30 15:32:08.002424 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Jan 30 15:32:08.002431 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Jan 30 15:32:08.002436 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 15:32:08.002441 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 30 15:32:08.002448 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 30 15:32:08.002453 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 30 15:32:08.002458 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Jan 30 15:32:08.002463 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Jan 30 15:32:08.002469 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Jan 30 15:32:08.002474 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 30 15:32:08.002479 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 15:32:08.002485 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 15:32:08.002490 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 15:32:08.002496 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 15:32:08.002501 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 15:32:08.002507 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 15:32:08.002512 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 15:32:08.002517 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 15:32:08.002522 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 15:32:08.002527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 15:32:08.002533 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 15:32:08.002538 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 15:32:08.002544 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 15:32:08.002549 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 15:32:08.002554 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 15:32:08.002560 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 15:32:08.002565 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 30 15:32:08.002570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 15:32:08.002575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 15:32:08.002581 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 15:32:08.002586 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 15:32:08.002592 kernel: TSC deadline timer available
Jan 30 15:32:08.002597 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 30 15:32:08.002602 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Jan 30 15:32:08.002608 kernel: Booting paravirtualized kernel on bare hardware
Jan 30 15:32:08.002613 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 15:32:08.002619 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 15:32:08.002624 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 15:32:08.002629 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 15:32:08.002634 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 15:32:08.002641 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 15:32:08.002647 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 15:32:08.002652 kernel: random: crng init done
Jan 30 15:32:08.002657 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 30 15:32:08.002662 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 30 15:32:08.002668 kernel: Fallback order for Node 0: 0
Jan 30 15:32:08.002673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Jan 30 15:32:08.002678 kernel: Policy zone: Normal
Jan 30 15:32:08.002684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 15:32:08.002689 kernel: software IO TLB: area num 16.
Jan 30 15:32:08.002695 kernel: Memory: 32551316K/33281940K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 730364K reserved, 0K cma-reserved)
Jan 30 15:32:08.002700 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 15:32:08.002706 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 15:32:08.002711 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 15:32:08.002716 kernel: Dynamic Preempt: voluntary
Jan 30 15:32:08.002721 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 15:32:08.002727 kernel: rcu: RCU event tracing is enabled.
Jan 30 15:32:08.002733 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 15:32:08.002739 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 15:32:08.002744 kernel: Rude variant of Tasks RCU enabled.
Jan 30 15:32:08.002749 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 15:32:08.002755 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 15:32:08.002760 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 15:32:08.002765 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 30 15:32:08.002770 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
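The hash-table lines above are internally consistent: each bucket is an 8-byte pointer, and "order: N" means 2^N contiguous 4 KiB pages were allocated for the table. A quick check:

```python
# Quick consistency check of the "Dentry cache hash table" line above:
# each bucket holds one 8-byte pointer, and "order: N" means 2**N
# contiguous 4 KiB pages were allocated for the table.
entries, order = 4194304, 13
assert entries * 8 == (2 ** order) * 4096 == 33554432  # "33554432 bytes"
# The Inode-cache line follows the same rule: 2097152 * 8 == 2**12 * 4096.
assert 2097152 * 8 == (2 ** 12) * 4096 == 16777216
```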
Jan 30 15:32:08.002776 kernel: Console: colour dummy device 80x25
Jan 30 15:32:08.002781 kernel: printk: console [tty0] enabled
Jan 30 15:32:08.002787 kernel: printk: console [ttyS1] enabled
Jan 30 15:32:08.002792 kernel: ACPI: Core revision 20230628
Jan 30 15:32:08.002798 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Jan 30 15:32:08.002803 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 15:32:08.002808 kernel: DMAR: Host address width 39
Jan 30 15:32:08.002814 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Jan 30 15:32:08.002819 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Jan 30 15:32:08.002824 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 30 15:32:08.002829 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 30 15:32:08.002836 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff
Jan 30 15:32:08.002841 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff
Jan 30 15:32:08.002846 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Jan 30 15:32:08.002851 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 30 15:32:08.002857 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 30 15:32:08.002862 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 30 15:32:08.002867 kernel: x2apic enabled
Jan 30 15:32:08.002872 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 30 15:32:08.002878 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 15:32:08.002884 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 30 15:32:08.002889 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 30 15:32:08.002895 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 30 15:32:08.002900 kernel: process: using mwait in idle threads
Jan 30 15:32:08.002905 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 15:32:08.002910 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 15:32:08.002916 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 15:32:08.002921 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 15:32:08.002927 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 15:32:08.002932 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 15:32:08.002938 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 15:32:08.002943 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 15:32:08.002948 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 15:32:08.002954 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 15:32:08.002959 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 15:32:08.002964 kernel: TAA: Mitigation: TSX disabled
Jan 30 15:32:08.002970 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 30 15:32:08.002976 kernel: SRBDS: Mitigation: Microcode
Jan 30 15:32:08.002981 kernel: GDS: Mitigation: Microcode
Jan 30 15:32:08.002986 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 15:32:08.002992 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 15:32:08.002997 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 15:32:08.003002 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 15:32:08.003007 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 15:32:08.003013 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 15:32:08.003018 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 15:32:08.003024 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 15:32:08.003030 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 30 15:32:08.003035 kernel: Freeing SMP alternatives memory: 32K
Jan 30 15:32:08.003040 kernel: pid_max: default: 32768 minimum: 301
Jan 30 15:32:08.003045 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 15:32:08.003051 kernel: landlock: Up and running.
Jan 30 15:32:08.003056 kernel: SELinux: Initializing.
Jan 30 15:32:08.003061 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 15:32:08.003067 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 15:32:08.003073 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 30 15:32:08.003078 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 15:32:08.003083 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 15:32:08.003089 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 15:32:08.003094 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 30 15:32:08.003099 kernel: ... version: 4
Jan 30 15:32:08.003105 kernel: ... bit width: 48
Jan 30 15:32:08.003110 kernel: ... generic registers: 4
Jan 30 15:32:08.003115 kernel: ... value mask: 0000ffffffffffff
Jan 30 15:32:08.003122 kernel: ... max period: 00007fffffffffff
Jan 30 15:32:08.003127 kernel: ... fixed-purpose events: 3
Jan 30 15:32:08.003132 kernel: ... event mask: 000000070000000f
Jan 30 15:32:08.003137 kernel: signal: max sigframe size: 2032
Jan 30 15:32:08.003143 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 30 15:32:08.003148 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 15:32:08.003153 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 15:32:08.003158 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 30 15:32:08.003164 kernel: smp: Bringing up secondary CPUs ...
Jan 30 15:32:08.003170 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 15:32:08.003175 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 30 15:32:08.003181 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 15:32:08.003186 kernel: smp: Brought up 1 node, 16 CPUs
Jan 30 15:32:08.003192 kernel: smpboot: Max logical packages: 1
Jan 30 15:32:08.003197 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 30 15:32:08.003202 kernel: devtmpfs: initialized
Jan 30 15:32:08.003208 kernel: x86/mm: Memory block size: 128MB
Jan 30 15:32:08.003213 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x620bb000-0x620bbfff] (4096 bytes)
Jan 30 15:32:08.003219 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes)
Jan 30 15:32:08.003225 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 15:32:08.003230 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 15:32:08.003235 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 15:32:08.003240 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 15:32:08.003246 kernel: audit: initializing netlink subsys (disabled)
Jan 30 15:32:08.003251 kernel: audit: type=2000 audit(1738251122.112:1): state=initialized audit_enabled=0 res=1
Jan 30 15:32:08.003256 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 15:32:08.003261 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 15:32:08.003268 kernel: cpuidle: using governor menu
Jan 30 15:32:08.003273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 15:32:08.003278 kernel: dca service started, version 1.12.1
Jan 30 15:32:08.003283 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 15:32:08.003289 kernel: PCI: Using configuration type 1 for base access
Jan 30 15:32:08.003294 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 30 15:32:08.003299 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
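The BogoMIPS figures above follow directly from the printed lpj value: bogomips = lpj / (500000 / HZ). HZ=1000 is an assumption here, but it is the value consistent with both printed numbers:

```python
# The BogoMIPS values above follow from lpj: bogomips = lpj / (500000 / HZ).
# HZ=1000 is an assumption, but it is the value consistent with the output.
lpj, HZ, cpus = 3399906, 1000, 16
per_cpu = lpj / (500000 / HZ)
print(f"{per_cpu:.2f} BogoMIPS per CPU, {per_cpu * cpus:.2f} total")
# -> 6799.81 per CPU and 108796.99 total, matching the two entries above
```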
Jan 30 15:32:08.003305 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 15:32:08.003311 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 15:32:08.003316 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 15:32:08.003321 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 15:32:08.003326 kernel: ACPI: Added _OSI(Module Device)
Jan 30 15:32:08.003332 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 15:32:08.003337 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 15:32:08.003342 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 15:32:08.003349 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 30 15:32:08.003355 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 15:32:08.003360 kernel: ACPI: SSDT 0xFFFF89BA41CFCC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 30 15:32:08.003366 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 15:32:08.003371 kernel: ACPI: SSDT 0xFFFF89BA41CE9800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 30 15:32:08.003377 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 15:32:08.003382 kernel: ACPI: SSDT 0xFFFF89BA4024F500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 30 15:32:08.003387 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 15:32:08.003392 kernel: ACPI: SSDT 0xFFFF89BA41CEB800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 30 15:32:08.003398 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 15:32:08.003403 kernel: ACPI: SSDT 0xFFFF89BA40129000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 30 15:32:08.003408 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 15:32:08.003414 kernel: ACPI: SSDT 0xFFFF89BA41CFE400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 30 15:32:08.003419 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 30 15:32:08.003425 kernel: ACPI: Interpreter enabled
Jan 30 15:32:08.003430 kernel: ACPI: PM: (supports S0 S5)
Jan 30 15:32:08.003435 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 15:32:08.003440 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 30 15:32:08.003446 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 30 15:32:08.003451 kernel: HEST: Table parsing has been initialized.
Jan 30 15:32:08.003456 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
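Both the static firmware tables enumerated earlier and the dynamically loaded OEM SSDTs above (Cpu0Cst, ApIst, ...) can be inspected on a running system through sysfs; a minimal sketch:

```python
import os

# Both the static firmware tables and the dynamically loaded OEM SSDTs
# logged above are exposed read-only under sysfs on a running system
# (reading the table contents generally requires root).
base = "/sys/firmware/acpi/tables"
for entry in sorted(os.listdir(base)):
    path = os.path.join(base, entry)
    if os.path.isfile(path):
        print(f"{entry}: {os.path.getsize(path)} bytes")
# The dynamic SSDTs (Cpu0Cst, Cpu0Ist, ...) appear under base + "/dynamic".
```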
Jan 30 15:32:08.003462 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 15:32:08.003468 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 15:32:08.003473 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 30 15:32:08.003478 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 30 15:32:08.003484 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 30 15:32:08.003489 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 30 15:32:08.003494 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 30 15:32:08.003499 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 30 15:32:08.003505 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jan 30 15:32:08.003511 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 30 15:32:08.003516 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 30 15:32:08.003522 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 30 15:32:08.003527 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 30 15:32:08.003532 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 30 15:32:08.003537 kernel: ACPI: \PIN_: New power resource
Jan 30 15:32:08.003543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 30 15:32:08.003611 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 15:32:08.003666 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 30 15:32:08.003714 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 30 15:32:08.003721 kernel: PCI host bridge to bus 0000:00
Jan 30 15:32:08.003769 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 15:32:08.003812 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 15:32:08.003853 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 15:32:08.003894 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window]
Jan 30 15:32:08.003937 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 30 15:32:08.003978 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 30 15:32:08.004035 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 30 15:32:08.004088 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 30 15:32:08.004136 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.004187 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Jan 30 15:32:08.004238 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.004288 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Jan 30 15:32:08.004335 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Jan 30 15:32:08.004385 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Jan 30 15:32:08.004432 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Jan 30 15:32:08.004482 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 30 15:32:08.004529 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Jan 30 15:32:08.004583 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 30 15:32:08.004631 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Jan 30 15:32:08.004683 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 30 15:32:08.004731 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Jan 30 15:32:08.004780 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 30 15:32:08.004837 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 30 15:32:08.004887 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Jan 30 15:32:08.004934 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Jan 30 15:32:08.004985 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 30 15:32:08.005032 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 15:32:08.005082 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 30 15:32:08.005129 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 15:32:08.005182 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 30 15:32:08.005228 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Jan 30 15:32:08.005275 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 30 15:32:08.005325 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 30 15:32:08.005402 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Jan 30 15:32:08.005463 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 30 15:32:08.005514 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 30 15:32:08.005564 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Jan 30 15:32:08.005610 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 30 15:32:08.005662 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 30 15:32:08.005709 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Jan 30 15:32:08.005758 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Jan 30 15:32:08.005803 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Jan 30 15:32:08.005849 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Jan 30 15:32:08.005895 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Jan 30 15:32:08.005941 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Jan 30 15:32:08.005987 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 30 15:32:08.006038 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 30 15:32:08.006086 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.006138 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 30 15:32:08.006185 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.006238 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 30 15:32:08.006285 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.006338 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 30 15:32:08.006391 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.006443 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Jan 30 15:32:08.006490 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.006543 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 30 15:32:08.006591 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 15:32:08.006643 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 30 15:32:08.006697 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 30 15:32:08.006744 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Jan 30 15:32:08.006791 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 30 15:32:08.006840 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 30 15:32:08.006887 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 30 15:32:08.006934 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 15:32:08.006988 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Jan 30 15:32:08.007039 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 30 15:32:08.007088 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Jan 30 15:32:08.007136 kernel: pci 0000:02:00.0: PME# supported from D3cold
Jan 30 15:32:08.007183 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 15:32:08.007232 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 15:32:08.007284 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Jan 30 15:32:08.007335 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 30 15:32:08.007410 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Jan 30 15:32:08.007475 kernel: pci 0000:02:00.1: PME# supported from D3cold
Jan 30 15:32:08.007523 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 15:32:08.007571 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 15:32:08.007620 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 30 15:32:08.007666 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Jan 30 15:32:08.007714 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 15:32:08.007763 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Jan 30 15:32:08.007818 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Jan 30 15:32:08.007865 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Jan 30 15:32:08.007914 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Jan 30 15:32:08.007962 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Jan 30 15:32:08.008010 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Jan 30 15:32:08.008059 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.008108 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Jan 30 15:32:08.008156 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 30 15:32:08.008202 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Jan 30 15:32:08.008254 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Jan 30 15:32:08.008302 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Jan 30 15:32:08.008353 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff]
Jan 30 15:32:08.008402 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Jan 30 15:32:08.008453 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff]
Jan 30 15:32:08.008500 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Jan 30 15:32:08.008549 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Jan 30 15:32:08.008596 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 30 15:32:08.008643 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff]
Jan 30 15:32:08.008690 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Jan 30 15:32:08.008743 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Jan 30 15:32:08.008794 kernel: pci 0000:07:00.0: enabling Extended Tags
Jan 30 15:32:08.008844 kernel: pci 0000:07:00.0: supports D1 D2
Jan 30 15:32:08.008893 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 15:32:08.008940 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Jan 30 15:32:08.008988 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Jan 30 15:32:08.009034 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff]
Jan 30 15:32:08.009087 kernel: pci_bus 0000:08: extended config space not accessible
Jan 30 15:32:08.009141 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Jan 30 15:32:08.009194 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff]
Jan 30 15:32:08.009244 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff]
Jan 30 15:32:08.009293 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Jan 30 15:32:08.009344 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 15:32:08.009398 kernel: pci 0000:08:00.0: supports D1 D2
Jan 30 15:32:08.009448 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 15:32:08.009498 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Jan 30 15:32:08.009546 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Jan 30 15:32:08.009597 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff]
Jan 30 15:32:08.009607 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Jan 30 15:32:08.009613 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Jan 30 15:32:08.009619 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Jan 30 15:32:08.009624 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Jan 30 15:32:08.009630 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Jan 30 15:32:08.009636 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Jan 30 15:32:08.009641 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Jan 30 15:32:08.009648 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Jan 30 15:32:08.009653 kernel: iommu: Default domain type: Translated
Jan 30 15:32:08.009659 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 15:32:08.009665 kernel: PCI: Using ACPI for IRQ routing
Jan 30 15:32:08.009670 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 15:32:08.009676 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Jan 30 15:32:08.009681 kernel: e820: reserve RAM buffer [mem 0x620bb000-0x63ffffff]
Jan 30 15:32:08.009687 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff]
Jan 30 15:32:08.009692 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff]
Jan 30 15:32:08.009699 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff]
Jan 30 15:32:08.009747 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Jan 30 15:32:08.009798 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Jan 30 15:32:08.009848 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 15:32:08.009856 kernel: vgaarb: loaded
Jan 30 15:32:08.009862 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 30 15:32:08.009868 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Jan 30 15:32:08.009873 kernel: clocksource: Switched to clocksource tsc-early
Jan 30 15:32:08.009879 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 15:32:08.009886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 15:32:08.009892 kernel: pnp: PnP ACPI init
Jan 30 15:32:08.009942 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Jan 30 15:32:08.009990 kernel: pnp 00:02: [dma 0 disabled]
Jan 30 15:32:08.010036 kernel: pnp 00:03: [dma 0 disabled]
Jan 30 15:32:08.010082 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Jan 30 15:32:08.010126 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Jan 30 15:32:08.010172 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Jan 30 15:32:08.010218 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Jan 30 15:32:08.010261 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Jan 30 15:32:08.010303 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Jan 30 15:32:08.010349 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Jan 30 15:32:08.010438 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Jan 30 15:32:08.010485 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Jan 30 15:32:08.010529 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Jan 30 15:32:08.010571 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Jan 30 15:32:08.010618 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Jan 30 15:32:08.010661 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Jan 30 15:32:08.010704 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Jan 30 15:32:08.010746 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Jan 30 15:32:08.010791 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Jan 30 15:32:08.010833 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Jan 30 15:32:08.010876 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Jan 30 15:32:08.010924 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Jan 30 15:32:08.010932 kernel: pnp: PnP ACPI: found 10 devices
Jan 30 15:32:08.010940 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 15:32:08.010945 kernel: NET: Registered PF_INET protocol family
Jan 30 15:32:08.010952 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 15:32:08.010958 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jan 30 15:32:08.010964 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 15:32:08.010969 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 15:32:08.010975 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 15:32:08.010980 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Jan 30 15:32:08.010986 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 15:32:08.010992 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 15:32:08.010997 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 15:32:08.011004 kernel: NET: Registered PF_XDP protocol family
Jan 30 15:32:08.011052 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit]
Jan 30 15:32:08.011099 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit]
Jan 30 15:32:08.011147 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit]
Jan 30 15:32:08.011194 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 15:32:08.011246 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 30 15:32:08.011295 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 30 15:32:08.011343 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 30 15:32:08.011395 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 30 15:32:08.011442 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 30 15:32:08.011489 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Jan 30 15:32:08.011536 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 15:32:08.011583 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Jan 30 15:32:08.011632 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Jan 30 15:32:08.011680 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 30 15:32:08.011726 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Jan 30 15:32:08.011773 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Jan 30 15:32:08.011820 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 30 15:32:08.011868 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff]
Jan 30 15:32:08.011915 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Jan 30 15:32:08.011963 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Jan 30 15:32:08.012013 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Jan 30 15:32:08.012063 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff]
Jan 30 15:32:08.012109 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Jan 30 15:32:08.012157 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Jan 30 15:32:08.012204 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff]
Jan 30 15:32:08.012246 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Jan 30 15:32:08.012289 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 15:32:08.012330 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 15:32:08.012374 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 15:32:08.012418 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window]
Jan 30 15:32:08.012459 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Jan 30 15:32:08.012505 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff]
Jan 30 15:32:08.012549 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 15:32:08.012595 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Jan 30 15:32:08.012638 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff]
Jan 30 15:32:08.012687 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 30 15:32:08.012731 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff]
Jan 30 15:32:08.012778 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Jan 30 15:32:08.012820 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff]
Jan 30 15:32:08.012866 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Jan 30 15:32:08.012911 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff]
Jan 30 15:32:08.012919 kernel: PCI: CLS 64 bytes, default 64
Jan 30 15:32:08.012926 kernel: DMAR: No ATSR found
Jan 30 15:32:08.012932 kernel: DMAR: No SATC found
Jan 30 15:32:08.012937 kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Jan 30 15:32:08.012943 kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Jan 30 15:32:08.012949 kernel: DMAR: IOMMU feature nwfs inconsistent
Jan 30 15:32:08.012954 kernel: DMAR: IOMMU feature pasid inconsistent
Jan 30 15:32:08.012960 kernel: DMAR: IOMMU feature eafs inconsistent
Jan 30 15:32:08.012966 kernel: DMAR: IOMMU feature prs inconsistent
Jan 30 15:32:08.012971 kernel: DMAR: IOMMU feature nest inconsistent
Jan 30 15:32:08.012977 kernel: DMAR: IOMMU feature mts inconsistent
Jan 30 15:32:08.012983 kernel: DMAR: IOMMU feature sc_support inconsistent
Jan 30 15:32:08.012989 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Jan 30 15:32:08.012994 kernel: DMAR: dmar0: Using Queued invalidation
Jan 30 15:32:08.013000 kernel: DMAR: dmar1: Using Queued invalidation
Jan 30 15:32:08.013047 kernel: pci 0000:00:02.0: Adding to iommu group 0
Jan 30 15:32:08.013095 kernel: pci 0000:00:00.0: Adding to iommu group 1
Jan 30 15:32:08.013142 kernel: pci 0000:00:01.0: Adding to iommu group 2
Jan 30 15:32:08.013189 kernel: pci 0000:00:01.1: Adding to iommu group 2
Jan 30 15:32:08.013238 kernel: pci 0000:00:08.0: Adding to iommu group 3
Jan 30 15:32:08.013285 kernel: pci 0000:00:12.0: Adding to iommu group 4
Jan 30 15:32:08.013332 kernel: pci 0000:00:14.0: Adding to iommu group 5
Jan 30 15:32:08.013381 kernel: pci 0000:00:14.2: Adding to iommu group 5
Jan 30 15:32:08.013428 kernel: pci 0000:00:15.0: Adding to iommu group 6
Jan 30 15:32:08.013475 kernel: pci 0000:00:15.1: Adding to iommu group 6
Jan 30 15:32:08.013521 kernel: pci 0000:00:16.0: Adding to iommu group 7
Jan 30 15:32:08.013567 kernel: pci 0000:00:16.1: Adding to iommu group 7
Jan 30 15:32:08.013616 kernel: pci 0000:00:16.4: Adding to iommu group 7
Jan 30 15:32:08.013663 kernel: pci 0000:00:17.0: Adding to iommu group 8
Jan 30 15:32:08.013709 kernel: pci 0000:00:1b.0: Adding to iommu group 9
Jan 30 15:32:08.013756 kernel: pci 0000:00:1b.4: Adding to iommu group 10
Jan 30 15:32:08.013804 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Jan 30 15:32:08.013851 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Jan 30 15:32:08.013898 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Jan 30 15:32:08.013945 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Jan 30 15:32:08.013991 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Jan 30 15:32:08.014040 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Jan 30 15:32:08.014087 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Jan 30 15:32:08.014136 kernel: pci 0000:02:00.0: Adding to iommu group 2
Jan 30 15:32:08.014184 kernel: pci 0000:02:00.1: Adding to iommu group 2
Jan 30 15:32:08.014232 kernel: pci 0000:04:00.0: Adding to iommu group 16
Jan 30 15:32:08.014281 kernel: pci 0000:05:00.0: Adding to iommu group 17
Jan 30 15:32:08.014328 kernel: pci 0000:07:00.0: Adding to iommu group 18
Jan 30 15:32:08.014426 kernel: pci 0000:08:00.0: Adding to iommu group 18
Jan 30 15:32:08.014437 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Jan 30 15:32:08.014443 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 15:32:08.014449 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB)
Jan 30 15:32:08.014454 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Jan 30 15:32:08.014460 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Jan 30 15:32:08.014465 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Jan 30 15:32:08.014471 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Jan 30 15:32:08.014477 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Jan 30 15:32:08.014528 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Jan 30 15:32:08.014538 kernel: Initialise system trusted keyrings
Jan 30 15:32:08.014544 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Jan 30 15:32:08.014549 kernel: Key type asymmetric registered
Jan 30 15:32:08.014555 kernel: Asymmetric key parser 'x509' registered
Jan 30 15:32:08.014560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 15:32:08.014566 kernel: io scheduler mq-deadline registered
Jan 30 15:32:08.014571 kernel: io scheduler kyber registered
Jan 30 15:32:08.014577 kernel: io scheduler bfq registered
Jan 30 15:32:08.014625 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Jan 30 15:32:08.014672 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Jan 30 15:32:08.014719 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Jan 30 15:32:08.014765 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Jan 30 15:32:08.014813 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Jan 30 15:32:08.014861 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Jan 30 15:32:08.014908 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Jan 30 15:32:08.014963 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jan 30 15:32:08.014972 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jan 30 15:32:08.014978 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Jan 30 15:32:08.014984 kernel: pstore: Using crash dump compression: deflate
Jan 30 15:32:08.014989 kernel: pstore: Registered erst as persistent store backend
Jan 30 15:32:08.014995 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 15:32:08.015001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 15:32:08.015006 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 15:32:08.015014 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 15:32:08.015060 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jan 30 15:32:08.015068 kernel: i8042: PNP: No PS/2 controller found.
Jan 30 15:32:08.015111 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jan 30 15:32:08.015154 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jan 30 15:32:08.015197 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T15:32:06 UTC (1738251126)
Jan 30 15:32:08.015239 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jan 30 15:32:08.015248 kernel: intel_pstate: Intel P-state driver initializing
Jan 30 15:32:08.015255 kernel: intel_pstate: Disabling energy efficiency optimization
Jan 30 15:32:08.015261 kernel: intel_pstate: HWP enabled
Jan 30 15:32:08.015266 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Jan 30 15:32:08.015272 kernel: vesafb: scrolling: redraw
Jan 30 15:32:08.015277 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Jan 30 15:32:08.015283 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x0000000058ce1075, using 768k, total 768k
Jan 30 15:32:08.015289 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 15:32:08.015294 kernel: fb0: VESA VGA frame buffer device
Jan 30 15:32:08.015300 kernel: NET: Registered PF_INET6 protocol family
Jan 30 15:32:08.015307 kernel: Segment Routing with IPv6
Jan 30 15:32:08.015312 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 15:32:08.015318 kernel: NET: Registered PF_PACKET protocol family
Jan 30 15:32:08.015323 kernel: Key type dns_resolver registered
Jan 30 15:32:08.015329 kernel: microcode: Microcode Update Driver: v2.2.
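The rtc_cmos entry above prints the same instant in two forms (ISO timestamp and Unix epoch); they can be cross-checked:

```python
from datetime import datetime, timezone

# The rtc_cmos entry above gives the same instant twice; cross-check:
ts = 1738251126
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2025-01-30T15:32:06+00:00
```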
Jan 30 15:32:08.015334 kernel: IPI shorthand broadcast: enabled
Jan 30 15:32:08.015340 kernel: sched_clock: Marking stable (1719001103, 1391413258)->(4572722793, -1462308432)
Jan 30 15:32:08.015346 kernel: registered taskstats version 1
Jan 30 15:32:08.015353 kernel: Loading compiled-in X.509 certificates
Jan 30 15:32:08.015360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 15:32:08.015366 kernel: Key type .fscrypt registered
Jan 30 15:32:08.015372 kernel: Key type fscrypt-provisioning registered
Jan 30 15:32:08.015377 kernel: ima: Allocated hash algorithm: sha1
Jan 30 15:32:08.015383 kernel: ima: No architecture policies found
Jan 30 15:32:08.015388 kernel: clk: Disabling unused clocks
Jan 30 15:32:08.015394 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 15:32:08.015399 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 15:32:08.015405 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 15:32:08.015412 kernel: Run /init as init process
Jan 30 15:32:08.015417 kernel: with arguments:
Jan 30 15:32:08.015423 kernel: /init
Jan 30 15:32:08.015429 kernel: with environment:
Jan 30 15:32:08.015434 kernel: HOME=/
Jan 30 15:32:08.015439 kernel: TERM=linux
Jan 30 15:32:08.015445 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 15:32:08.015452 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 15:32:08.015460 systemd[1]: Detected architecture x86-64.
Jan 30 15:32:08.015466 systemd[1]: Running in initrd.
Jan 30 15:32:08.015472 systemd[1]: No hostname configured, using default hostname.
Jan 30 15:32:08.015477 systemd[1]: Hostname set to <localhost>.
Jan 30 15:32:08.015483 systemd[1]: Initializing machine ID from random generator.
Jan 30 15:32:08.015489 systemd[1]: Queued start job for default target initrd.target.
Jan 30 15:32:08.015495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 15:32:08.015501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 15:32:08.015508 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 15:32:08.015514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:32:08.015520 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 15:32:08.015526 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 15:32:08.015532 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 15:32:08.015538 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
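The escaped device-unit names in the entries above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) come from systemd's unit-name escaping of the underlying paths. A simplified sketch of the rule (the real implementation, available as `systemd-escape --path`, handles a few more corner cases):

```python
# Simplified sketch of systemd's unit-name escaping, which explains names
# like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device in the entries above:
# "/" separators become "-", and characters outside [a-zA-Z0-9_] (plus a
# non-leading ".") are hex-escaped as \xNN, so "-" turns into \x2d.
def escape_path(path: str) -> str:
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```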
Jan 30 15:32:08.015545 kernel: tsc: Refined TSC clocksource calibration: 3407.997 MHz Jan 30 15:32:08.015551 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd171fc9, max_idle_ns: 440795303639 ns Jan 30 15:32:08.015556 kernel: clocksource: Switched to clocksource tsc Jan 30 15:32:08.015562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:32:08.015568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:32:08.015574 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:32:08.015580 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:32:08.015586 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:32:08.015591 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:32:08.015598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:32:08.015604 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:32:08.015610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:32:08.015616 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:32:08.015622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:32:08.015628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:32:08.015633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:32:08.015639 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:32:08.015646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:32:08.015652 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:32:08.015658 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:32:08.015663 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:32:08.015669 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:32:08.015685 systemd-journald[266]: Collecting audit messages is disabled. Jan 30 15:32:08.015700 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:32:08.015707 systemd-journald[266]: Journal started Jan 30 15:32:08.015719 systemd-journald[266]: Runtime Journal (/run/log/journal/c046678fddac42a98b2ee2eab8ad5182) is 8.0M, max 636.6M, 628.6M free. Jan 30 15:32:08.050195 systemd-modules-load[268]: Inserted module 'overlay' Jan 30 15:32:08.059541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:32:08.080246 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:32:08.159587 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:32:08.159604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:32:08.159613 kernel: Bridge firewalling registered Jan 30 15:32:08.140501 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 30 15:32:08.140622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:32:08.170668 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:32:08.187693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:32:08.208725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 15:32:08.242823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:32:08.248170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:32:08.267994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:32:08.268409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:32:08.271787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:32:08.273068 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:32:08.273840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:32:08.274937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:32:08.276002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:32:08.279990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:32:08.285564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:32:08.297095 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:32:08.300674 systemd-resolved[297]: Positive Trust Anchors: Jan 30 15:32:08.300683 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:32:08.300717 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:32:08.302963 systemd-resolved[297]: Defaulting to hostname 'linux'. Jan 30 15:32:08.307647 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:32:08.324703 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:32:08.449929 dracut-cmdline[305]: dracut-dracut-053 Jan 30 15:32:08.457570 dracut-cmdline[305]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:32:08.642382 kernel: SCSI subsystem initialized Jan 30 15:32:08.665402 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:32:08.688354 kernel: iscsi: registered transport (tcp) Jan 30 15:32:08.719829 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:32:08.719847 kernel: QLogic iSCSI HBA Driver Jan 30 15:32:08.752807 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 15:32:08.775663 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:32:08.831265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 15:32:08.831284 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:32:08.851005 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:32:08.908422 kernel: raid6: avx2x4 gen() 53250 MB/s Jan 30 15:32:08.940426 kernel: raid6: avx2x2 gen() 53882 MB/s Jan 30 15:32:08.976842 kernel: raid6: avx2x1 gen() 45234 MB/s Jan 30 15:32:08.976861 kernel: raid6: using algorithm avx2x2 gen() 53882 MB/s Jan 30 15:32:09.024910 kernel: raid6: .... xor() 31283 MB/s, rmw enabled Jan 30 15:32:09.024927 kernel: raid6: using avx2x2 recovery algorithm Jan 30 15:32:09.066380 kernel: xor: automatically using best checksumming function avx Jan 30 15:32:09.183397 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:32:09.188919 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:32:09.217679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:32:09.224734 systemd-udevd[491]: Using default interface naming scheme 'v255'. Jan 30 15:32:09.229470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:32:09.263547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:32:09.309657 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Jan 30 15:32:09.327001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:32:09.349611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:32:09.440708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:32:09.473369 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 15:32:09.473396 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 15:32:09.475474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:32:09.562662 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 15:32:09.562688 kernel: ACPI: bus type USB registered Jan 30 15:32:09.562702 kernel: usbcore: registered new interface driver usbfs Jan 30 15:32:09.562716 kernel: usbcore: registered new interface driver hub Jan 30 15:32:09.562729 kernel: usbcore: registered new device driver usb Jan 30 15:32:09.513813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:32:09.593362 kernel: PTP clock support registered Jan 30 15:32:09.593385 kernel: libata version 3.00 loaded. Jan 30 15:32:09.593399 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 15:32:09.593417 kernel: AES CTR mode by8 optimization enabled Jan 30 15:32:09.513913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 15:32:10.465447 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 15:32:10.465549 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 15:32:10.465621 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 30 15:32:10.465685 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 15:32:10.465746 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 15:32:10.465806 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 15:32:10.465865 kernel: scsi host0: ahci Jan 30 15:32:10.465927 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 15:32:10.465987 kernel: scsi host1: ahci Jan 30 15:32:10.466047 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 15:32:10.466107 kernel: scsi host2: ahci Jan 30 15:32:10.466165 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 15:32:10.466223 kernel: scsi host3: ahci Jan 30 15:32:10.466280 kernel: hub 1-0:1.0: USB hub found Jan 30 15:32:10.466346 kernel: scsi host4: ahci Jan 30 15:32:10.466409 kernel: hub 1-0:1.0: 16 ports detected Jan 30 15:32:10.466470 kernel: scsi host5: ahci Jan 30 15:32:10.466528 kernel: hub 2-0:1.0: USB hub found Jan 30 15:32:10.466589 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 15:32:10.466598 kernel: scsi host6: ahci Jan 30 15:32:10.466653 kernel: scsi host7: ahci Jan 30 15:32:10.466712 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Jan 30 15:32:10.466720 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Jan 30 15:32:10.466729 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Jan 30 15:32:10.466736 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Jan 30 15:32:10.466743 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Jan 30 15:32:10.466749 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Jan 30 15:32:10.466756 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Jan 30 15:32:10.466763 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Jan 30 15:32:10.466770 kernel: hub 2-0:1.0: 10 ports detected Jan 30 15:32:10.466827 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 30 15:32:10.466836 kernel: pps pps0: new PPS source ptp0 Jan 30 15:32:10.466896 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 15:32:10.573836 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 30 15:32:10.573913 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.573922 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 15:32:10.573985 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.573993 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d2 Jan 30 15:32:10.574057 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 15:32:10.574066 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 30 15:32:10.574126 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574134 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 15:32:10.574193 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574201 kernel: pps pps1: new PPS source ptp1 Jan 30 15:32:10.574257 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 15:32:10.574265 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 30 15:32:10.574330 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 15:32:10.574339 kernel: hub 1-14:1.0: USB hub found Jan 30 15:32:10.574447 kernel: hub 1-14:1.0: 4 ports detected Jan 30 15:32:10.574506 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 15:32:10.574565 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 15:32:10.574573 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d3 Jan 30 15:32:10.574632 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574640 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 30 15:32:10.574700 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574708 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 15:32:10.574766 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 15:32:10.574774 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Jan 30 15:32:11.223702 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 15:32:11.223713 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 15:32:11.223792 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 15:32:11.223911 kernel: ata2.00: Features: NCQ-prio Jan 30 15:32:11.223921 kernel: ata1.00: Features: NCQ-prio Jan 30 15:32:11.223928 kernel: ata2.00: configured for UDMA/133 Jan 30 15:32:11.223936 kernel: ata1.00: configured for UDMA/133 Jan 30 15:32:11.223943 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 15:32:11.224023 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 15:32:11.224087 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 30 15:32:11.224160 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 15:32:11.224171 kernel: usbcore: registered new interface driver usbhid Jan 30 15:32:11.224178 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 30 15:32:11.224245 kernel: usbhid: USB HID core driver Jan 30 15:32:11.224254 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 15:32:11.224261 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 15:32:11.224268 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 15:32:11.224330 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.224338 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 15:32:11.224408 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 15:32:11.224470 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 15:32:11.224530 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 15:32:11.224599 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 15:32:11.224669 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 15:32:11.224729 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 15:32:11.224788 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 
15:32:11.224796 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:32:11.224806 kernel: GPT:9289727 != 937703087 Jan 30 15:32:11.224813 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:32:11.224820 kernel: GPT:9289727 != 937703087 Jan 30 15:32:11.224827 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:32:11.224834 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.224841 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 15:32:11.224899 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 30 15:32:11.224964 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 15:32:11.225024 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 15:32:11.225095 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 15:32:11.225155 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 15:32:11.225164 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 15:32:11.225222 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 15:32:11.225288 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 15:32:11.225358 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 15:32:11.225421 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 15:32:11.225429 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 15:32:11.225486 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 15:32:11.225551 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Jan 30 15:32:11.837571 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (672) Jan 30 15:32:11.837600 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 15:32:11.837818 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (577) Jan 30 15:32:11.837848 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.837878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.837904 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.837929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.837948 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.837970 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.837990 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 15:32:11.838175 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 30 15:32:11.838360 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 15:32:09.583660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:32:09.618952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:32:11.869499 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jan 30 15:32:11.869580 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 30 15:32:09.619160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:10.561471 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 15:32:10.623639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:32:10.699595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:32:11.151572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:32:11.939429 disk-uuid[709]: Primary Header is updated. Jan 30 15:32:11.939429 disk-uuid[709]: Secondary Entries is updated. Jan 30 15:32:11.939429 disk-uuid[709]: Secondary Header is updated. Jan 30 15:32:11.166158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:32:11.166232 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:32:11.203485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:32:11.225600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:11.343145 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 30 15:32:11.354553 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:32:11.371026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 30 15:32:11.386027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 15:32:11.400499 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 15:32:11.411429 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 15:32:11.428460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:32:11.445486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:32:11.476679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:32:12.525452 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:12.544973 disk-uuid[710]: The operation has completed successfully. Jan 30 15:32:12.553469 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:12.581261 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:32:12.581313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:32:12.602609 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:32:12.648566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 15:32:12.648635 sh[750]: Success Jan 30 15:32:12.683034 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:32:12.702259 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:32:12.718695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 15:32:12.761168 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 15:32:12.761186 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:12.782616 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:32:12.801686 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:32:12.819736 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:32:12.857390 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 15:32:12.859879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:32:12.868768 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:32:12.884570 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:32:12.902772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 15:32:13.002524 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:13.002538 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:13.002549 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:32:13.002556 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:32:13.002563 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:32:13.026427 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:13.037685 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:32:13.058558 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:32:13.085024 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:32:13.115475 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:32:13.125100 unknown[839]: fetched base config from "system" Jan 30 15:32:13.122893 ignition[839]: Ignition 2.19.0 Jan 30 15:32:13.125104 unknown[839]: fetched user config from "system" Jan 30 15:32:13.122898 ignition[839]: Stage: fetch-offline Jan 30 15:32:13.126048 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:32:13.122920 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:13.126471 systemd-networkd[934]: lo: Link UP Jan 30 15:32:13.122926 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:13.126473 systemd-networkd[934]: lo: Gained carrier Jan 30 15:32:13.122980 ignition[839]: parsed url from cmdline: "" Jan 30 15:32:13.128803 systemd-networkd[934]: Enumeration completed Jan 30 15:32:13.122982 ignition[839]: no config URL provided Jan 30 15:32:13.129722 systemd-networkd[934]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:32:13.122984 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:32:13.143608 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:32:13.123007 ignition[839]: parsing config with SHA512: 5602b49db9414e8fb4d7bf652760780e862feb096e32c0b30bd06c2bf23667567130a37fd8150f35e0cae8ffa3a761dad1bfb7dbe85d194ebbef4b33aa2d60d3 Jan 30 15:32:13.157709 systemd-networkd[934]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 15:32:13.125362 ignition[839]: fetch-offline: fetch-offline passed Jan 30 15:32:13.161908 systemd[1]: Reached target network.target - Network. Jan 30 15:32:13.125364 ignition[839]: POST message to Packet Timeline Jan 30 15:32:13.175513 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 15:32:13.125367 ignition[839]: POST Status error: resource requires networking Jan 30 15:32:13.186223 systemd-networkd[934]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:32:13.125404 ignition[839]: Ignition finished successfully Jan 30 15:32:13.188613 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:32:13.209269 ignition[947]: Ignition 2.19.0 Jan 30 15:32:13.396514 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 30 15:32:13.392202 systemd-networkd[934]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:32:13.209283 ignition[947]: Stage: kargs Jan 30 15:32:13.209636 ignition[947]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:13.209657 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:13.211644 ignition[947]: kargs: kargs passed Jan 30 15:32:13.211654 ignition[947]: POST message to Packet Timeline Jan 30 15:32:13.211679 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:13.213014 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56558->[::1]:53: read: connection refused Jan 30 15:32:13.413836 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 15:32:13.414781 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44571->[::1]:53: read: connection refused Jan 30 15:32:13.621387 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 30 15:32:13.622752 systemd-networkd[934]: eno1: Link UP Jan 30 15:32:13.622890 systemd-networkd[934]: eno2: Link UP Jan 30 15:32:13.623015 systemd-networkd[934]: enp2s0f0np0: Link UP Jan 30 15:32:13.623166 systemd-networkd[934]: enp2s0f0np0: Gained carrier Jan 30 15:32:13.632577 systemd-networkd[934]: enp2s0f1np1: Link UP Jan 30 15:32:13.663542 systemd-networkd[934]: enp2s0f0np0: DHCPv4 address 139.178.70.183/31, gateway 139.178.70.182 acquired from 145.40.83.140 Jan 30 15:32:13.815217 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 15:32:13.816490 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54656->[::1]:53: read: connection refused Jan 30 15:32:14.423032 systemd-networkd[934]: enp2s0f1np1: Gained carrier Jan 30 15:32:14.616726 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 15:32:14.617855 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56038->[::1]:53: read: connection refused Jan 30 15:32:14.678856 systemd-networkd[934]: enp2s0f0np0: Gained IPv6LL Jan 30 15:32:15.574864 systemd-networkd[934]: enp2s0f1np1: Gained IPv6LL Jan 30 15:32:16.219675 ignition[947]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 15:32:16.220781 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp 
[::1]:50850->[::1]:53: read: connection refused Jan 30 15:32:19.424306 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 15:32:19.965170 ignition[947]: GET result: OK Jan 30 15:32:20.363415 ignition[947]: Ignition finished successfully Jan 30 15:32:20.368103 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:32:20.400574 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:32:20.406996 ignition[962]: Ignition 2.19.0 Jan 30 15:32:20.407000 ignition[962]: Stage: disks Jan 30 15:32:20.407110 ignition[962]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:20.407116 ignition[962]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:20.407710 ignition[962]: disks: disks passed Jan 30 15:32:20.407713 ignition[962]: POST message to Packet Timeline Jan 30 15:32:20.407723 ignition[962]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:21.034819 ignition[962]: GET result: OK Jan 30 15:32:21.392886 ignition[962]: Ignition finished successfully Jan 30 15:32:21.395416 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 15:32:21.411692 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 15:32:21.429593 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:32:21.450578 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:32:21.471732 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:32:21.492641 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:32:21.512598 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:32:21.549322 systemd-fsck[979]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 15:32:21.560783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:32:21.590542 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:32:21.687351 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 15:32:21.687716 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:32:21.696834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:32:21.713583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:32:21.739109 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:32:21.783445 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (988) Jan 30 15:32:21.783458 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:21.753921 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 15:32:21.863453 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:21.863464 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:32:21.863475 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:32:21.863482 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:32:21.882910 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 15:32:21.893427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 30 15:32:21.893445 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:32:21.925795 coreos-metadata[990]: Jan 30 15:32:21.925 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 15:32:21.956761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:32:21.966534 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:32:21.991527 coreos-metadata[1006]: Jan 30 15:32:21.969 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 15:32:21.993623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:32:22.022543 initrd-setup-root[1020]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:32:22.033492 initrd-setup-root[1027]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:32:22.043478 initrd-setup-root[1034]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:32:22.054478 initrd-setup-root[1041]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:32:22.074906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:32:22.102595 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:32:22.137547 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:22.122233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:32:22.146175 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 15:32:22.169941 ignition[1108]: INFO : Ignition 2.19.0 Jan 30 15:32:22.169941 ignition[1108]: INFO : Stage: mount Jan 30 15:32:22.184453 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:22.184453 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:22.184453 ignition[1108]: INFO : mount: mount passed Jan 30 15:32:22.184453 ignition[1108]: INFO : POST message to Packet Timeline Jan 30 15:32:22.184453 ignition[1108]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:22.179646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 15:32:22.493092 coreos-metadata[1006]: Jan 30 15:32:22.492 INFO Fetch successful Jan 30 15:32:22.543123 coreos-metadata[990]: Jan 30 15:32:22.543 INFO Fetch successful Jan 30 15:32:22.571018 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 15:32:22.571077 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 15:32:22.614527 coreos-metadata[990]: Jan 30 15:32:22.578 INFO wrote hostname ci-4081.3.0-a-8297fae690 to /sysroot/etc/hostname Jan 30 15:32:22.582668 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 15:32:22.822793 ignition[1108]: INFO : GET result: OK Jan 30 15:32:23.170263 ignition[1108]: INFO : Ignition finished successfully Jan 30 15:32:23.172738 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:32:23.206609 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:32:23.217747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 15:32:23.262352 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1132) Jan 30 15:32:23.297253 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:23.297269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:23.314410 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:32:23.351524 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:32:23.351540 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:32:23.364290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:32:23.385860 ignition[1149]: INFO : Ignition 2.19.0 Jan 30 15:32:23.385860 ignition[1149]: INFO : Stage: files Jan 30 15:32:23.399611 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:23.399611 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:23.399611 ignition[1149]: DEBUG : files: compiled without relabeling support, skipping Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 15:32:23.399611 ignition[1149]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 15:32:23.390090 unknown[1149]: wrote ssh authorized keys file for user: core Jan 30 15:32:23.565559 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 15:32:23.904929 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:32:23.904929 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 15:32:24.372400 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 15:32:24.598355 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:24.598355 ignition[1149]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:32:24.627657 
ignition[1149]: INFO : files: files passed Jan 30 15:32:24.627657 ignition[1149]: INFO : POST message to Packet Timeline Jan 30 15:32:24.627657 ignition[1149]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:25.378738 ignition[1149]: INFO : GET result: OK Jan 30 15:32:25.761008 ignition[1149]: INFO : Ignition finished successfully Jan 30 15:32:25.763888 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 15:32:25.803661 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 15:32:25.804266 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 15:32:25.822851 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 15:32:25.822924 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 15:32:25.869395 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:32:25.885944 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 15:32:25.916554 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:32:25.916554 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:32:25.930568 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:32:25.923626 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 15:32:25.994542 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 15:32:25.994873 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 15:32:26.015545 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 15:32:26.035702 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 15:32:26.055831 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 15:32:26.065752 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 15:32:26.145755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:32:26.169729 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 15:32:26.198325 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:32:26.209831 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:32:26.232043 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 15:32:26.249969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 15:32:26.250383 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:32:26.278088 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 15:32:26.300983 systemd[1]: Stopped target basic.target - Basic System. Jan 30 15:32:26.318961 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 15:32:26.338965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:32:26.359956 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 15:32:26.380973 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 30 15:32:26.402102 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:32:26.422998 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 15:32:26.443979 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 15:32:26.465093 systemd[1]: Stopped target swap.target - Swaps. Jan 30 15:32:26.482854 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 15:32:26.483256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:32:26.518788 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:32:26.528985 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:32:26.549838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 15:32:26.550303 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:32:26.572986 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 15:32:26.573412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 15:32:26.604937 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 15:32:26.605413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:32:26.625174 systemd[1]: Stopped target paths.target - Path Units. Jan 30 15:32:26.644830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 15:32:26.645288 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:32:26.665972 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 15:32:26.683989 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 15:32:26.701892 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 15:32:26.702186 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:32:26.722996 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 15:32:26.723297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:32:26.746047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 15:32:26.746476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:32:26.767053 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 15:32:26.873477 ignition[1211]: INFO : Ignition 2.19.0 Jan 30 15:32:26.873477 ignition[1211]: INFO : Stage: umount Jan 30 15:32:26.873477 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:26.873477 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:26.873477 ignition[1211]: INFO : umount: umount passed Jan 30 15:32:26.873477 ignition[1211]: INFO : POST message to Packet Timeline Jan 30 15:32:26.873477 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:26.767454 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 15:32:26.786039 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 15:32:26.786455 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 15:32:26.815543 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 15:32:26.845610 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 15:32:26.864438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 30 15:32:26.864530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:32:26.884579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 15:32:26.884682 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:32:26.916377 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 15:32:26.920381 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 15:32:26.920631 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 15:32:27.006426 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 15:32:27.006712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 15:32:27.463825 ignition[1211]: INFO : GET result: OK Jan 30 15:32:27.817816 ignition[1211]: INFO : Ignition finished successfully Jan 30 15:32:27.820928 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 15:32:27.821215 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 15:32:27.838643 systemd[1]: Stopped target network.target - Network. Jan 30 15:32:27.853645 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 15:32:27.853829 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 15:32:27.871729 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 15:32:27.871869 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 15:32:27.889678 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 15:32:27.889809 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 15:32:27.907789 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 15:32:27.907949 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 15:32:27.925735 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 15:32:27.925904 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 15:32:27.945191 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 15:32:27.954505 systemd-networkd[934]: enp2s0f0np0: DHCPv6 lease lost Jan 30 15:32:27.962561 systemd-networkd[934]: enp2s0f1np1: DHCPv6 lease lost Jan 30 15:32:27.962900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 15:32:27.981539 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 15:32:27.981813 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 15:32:28.000369 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 15:32:28.000689 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 15:32:28.020877 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 15:32:28.020988 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:32:28.059528 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 15:32:28.067508 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 15:32:28.067563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:32:28.089682 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:32:28.089751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:32:28.107767 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 15:32:28.107885 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 15:32:28.125845 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 15:32:28.126007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:32:28.147065 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:32:28.166795 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 15:32:28.167258 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:32:28.203607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 15:32:28.203752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 15:32:28.206973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 15:32:28.207078 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:32:28.236719 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 15:32:28.236860 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:32:28.267050 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 15:32:28.267326 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 15:32:28.305532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:32:28.305782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:32:28.354691 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 15:32:28.377514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 15:32:28.621486 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Jan 30 15:32:28.377743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:32:28.399690 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 15:32:28.399835 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:32:28.418656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 15:32:28.418789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:32:28.440639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:32:28.440774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:28.462663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 15:32:28.462877 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 15:32:28.483179 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 15:32:28.483424 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 15:32:28.505309 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 15:32:28.541848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 15:32:28.559025 systemd[1]: Switching root. 
Jan 30 15:32:28.731633 systemd-journald[266]: Journal stopped
Jan 30 15:32:08.002073 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 30 15:32:08.002078 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 30 15:32:08.002084 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 30 15:32:08.002089 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 15:32:08.002094 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 30 15:32:08.002098 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 30 15:32:08.002103 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 15:32:08.002108 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 15:32:08.002113 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 30 15:32:08.002118 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 30 15:32:08.002124 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 15:32:08.002129 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 30 15:32:08.002134 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 30 15:32:08.002139 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Jan 30 15:32:08.002143 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 30 15:32:08.002148 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 30 15:32:08.002153 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 30 15:32:08.002158 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 ?b 01072009 AMI 00010013) Jan 30 15:32:08.002163 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 30 15:32:08.002169 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 30 15:32:08.002174 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 30 15:32:08.002179 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI.
00000000) Jan 30 15:32:08.002184 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 30 15:32:08.002189 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Jan 30 15:32:08.002194 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Jan 30 15:32:08.002198 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Jan 30 15:32:08.002203 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Jan 30 15:32:08.002208 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Jan 30 15:32:08.002214 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Jan 30 15:32:08.002219 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Jan 30 15:32:08.002224 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Jan 30 15:32:08.002228 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Jan 30 15:32:08.002233 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Jan 30 15:32:08.002238 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Jan 30 15:32:08.002243 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Jan 30 15:32:08.002248 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Jan 30 15:32:08.002253 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Jan 30 15:32:08.002258 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Jan 30 15:32:08.002263 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Jan 30 15:32:08.002268 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Jan 30 15:32:08.002273 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Jan 30 15:32:08.002278 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Jan 30 15:32:08.002283 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103] Jan 30 15:32:08.002288 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Jan 30 15:32:08.002292 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Jan 30 15:32:08.002297 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Jan 30 15:32:08.002303 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Jan 30 15:32:08.002308 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Jan 30 15:32:08.002313 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Jan 30 15:32:08.002318 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Jan 30 15:32:08.002323 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Jan 30 15:32:08.002327 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Jan 30 15:32:08.002332 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b] Jan 30 15:32:08.002337 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Jan 30 15:32:08.002342 kernel: No NUMA configuration found Jan 30 15:32:08.002351 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Jan 30 15:32:08.002356 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Jan 30 15:32:08.002361 kernel: Zone ranges: Jan 30 15:32:08.002366 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 15:32:08.002371 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 
15:32:08.002376 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Jan 30 15:32:08.002381 kernel: Movable zone start for each node Jan 30 15:32:08.002385 kernel: Early memory node ranges Jan 30 15:32:08.002390 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 30 15:32:08.002395 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 30 15:32:08.002401 kernel: node 0: [mem 0x0000000040400000-0x00000000620bafff] Jan 30 15:32:08.002406 kernel: node 0: [mem 0x00000000620bd000-0x000000006c0c4fff] Jan 30 15:32:08.002411 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Jan 30 15:32:08.002416 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Jan 30 15:32:08.002424 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Jan 30 15:32:08.002431 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Jan 30 15:32:08.002436 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 15:32:08.002441 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 30 15:32:08.002448 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 30 15:32:08.002453 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 30 15:32:08.002458 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Jan 30 15:32:08.002463 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Jan 30 15:32:08.002469 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Jan 30 15:32:08.002474 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 30 15:32:08.002479 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 30 15:32:08.002485 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 30 15:32:08.002490 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 30 15:32:08.002496 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 30 15:32:08.002501 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 30 15:32:08.002507 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 30 15:32:08.002512 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 30 15:32:08.002517 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 30 15:32:08.002522 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 30 15:32:08.002527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 30 15:32:08.002533 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 30 15:32:08.002538 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 30 15:32:08.002544 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 30 15:32:08.002549 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 30 15:32:08.002554 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 30 15:32:08.002560 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 30 15:32:08.002565 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 30 15:32:08.002570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 15:32:08.002575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 15:32:08.002581 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 15:32:08.002586 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 15:32:08.002592 kernel: TSC deadline timer available Jan 30 15:32:08.002597 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 30 15:32:08.002602 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Jan 30 15:32:08.002608 kernel: 
Booting paravirtualized kernel on bare hardware Jan 30 15:32:08.002613 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 15:32:08.002619 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 15:32:08.002624 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 15:32:08.002629 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 15:32:08.002634 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 15:32:08.002641 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:32:08.002647 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 15:32:08.002652 kernel: random: crng init done Jan 30 15:32:08.002657 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 30 15:32:08.002662 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 30 15:32:08.002668 kernel: Fallback order for Node 0: 0 Jan 30 15:32:08.002673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Jan 30 15:32:08.002678 kernel: Policy zone: Normal Jan 30 15:32:08.002684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 15:32:08.002689 kernel: software IO TLB: area num 16. Jan 30 15:32:08.002695 kernel: Memory: 32551316K/33281940K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 730364K reserved, 0K cma-reserved) Jan 30 15:32:08.002700 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 15:32:08.002706 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 15:32:08.002711 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 15:32:08.002716 kernel: Dynamic Preempt: voluntary Jan 30 15:32:08.002721 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 15:32:08.002727 kernel: rcu: RCU event tracing is enabled. Jan 30 15:32:08.002733 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 15:32:08.002739 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 15:32:08.002744 kernel: Rude variant of Tasks RCU enabled. Jan 30 15:32:08.002749 kernel: Tracing variant of Tasks RCU enabled. Jan 30 15:32:08.002755 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 15:32:08.002760 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 15:32:08.002765 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 15:32:08.002770 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 15:32:08.002776 kernel: Console: colour dummy device 80x25 Jan 30 15:32:08.002781 kernel: printk: console [tty0] enabled Jan 30 15:32:08.002787 kernel: printk: console [ttyS1] enabled Jan 30 15:32:08.002792 kernel: ACPI: Core revision 20230628 Jan 30 15:32:08.002798 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jan 30 15:32:08.002803 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 15:32:08.002808 kernel: DMAR: Host address width 39 Jan 30 15:32:08.002814 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Jan 30 15:32:08.002819 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Jan 30 15:32:08.002824 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 15:32:08.002829 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 15:32:08.002836 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Jan 30 15:32:08.002841 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Jan 30 15:32:08.002846 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Jan 30 15:32:08.002851 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 15:32:08.002857 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 30 15:32:08.002862 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 15:32:08.002867 kernel: x2apic enabled Jan 30 15:32:08.002872 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 15:32:08.002878 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 15:32:08.002884 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 15:32:08.002889 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Jan 30 15:32:08.002895 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 15:32:08.002900 kernel: process: using mwait in idle threads Jan 30 15:32:08.002905 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 15:32:08.002910 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 15:32:08.002916 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 15:32:08.002921 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 15:32:08.002927 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 15:32:08.002932 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 15:32:08.002938 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 15:32:08.002943 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 15:32:08.002948 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 15:32:08.002954 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 15:32:08.002959 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 15:32:08.002964 kernel: TAA: Mitigation: TSX disabled Jan 30 15:32:08.002970 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 15:32:08.002976 kernel: SRBDS: Mitigation: Microcode Jan 30 15:32:08.002981 kernel: GDS: Mitigation: Microcode Jan 30 15:32:08.002986 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 15:32:08.002992 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 15:32:08.002997 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 15:32:08.003002 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 15:32:08.003007 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 15:32:08.003013 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 15:32:08.003018 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 15:32:08.003024 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 15:32:08.003030 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 30 15:32:08.003035 kernel: Freeing SMP alternatives memory: 32K Jan 30 15:32:08.003040 kernel: pid_max: default: 32768 minimum: 301 Jan 30 15:32:08.003045 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 15:32:08.003051 kernel: landlock: Up and running. Jan 30 15:32:08.003056 kernel: SELinux: Initializing. Jan 30 15:32:08.003061 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 15:32:08.003067 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 15:32:08.003073 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 15:32:08.003078 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 15:32:08.003083 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 15:32:08.003089 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 15:32:08.003094 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 15:32:08.003099 kernel: ... 
version: 4 Jan 30 15:32:08.003105 kernel: ... bit width: 48 Jan 30 15:32:08.003110 kernel: ... generic registers: 4 Jan 30 15:32:08.003115 kernel: ... value mask: 0000ffffffffffff Jan 30 15:32:08.003122 kernel: ... max period: 00007fffffffffff Jan 30 15:32:08.003127 kernel: ... fixed-purpose events: 3 Jan 30 15:32:08.003132 kernel: ... event mask: 000000070000000f Jan 30 15:32:08.003137 kernel: signal: max sigframe size: 2032 Jan 30 15:32:08.003143 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 15:32:08.003148 kernel: rcu: Hierarchical SRCU implementation. Jan 30 15:32:08.003153 kernel: rcu: Max phase no-delay instances is 400. Jan 30 15:32:08.003158 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 15:32:08.003164 kernel: smp: Bringing up secondary CPUs ... Jan 30 15:32:08.003170 kernel: smpboot: x86: Booting SMP configuration: Jan 30 15:32:08.003175 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 15:32:08.003181 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 30 15:32:08.003186 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 15:32:08.003192 kernel: smpboot: Max logical packages: 1 Jan 30 15:32:08.003197 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 15:32:08.003202 kernel: devtmpfs: initialized Jan 30 15:32:08.003208 kernel: x86/mm: Memory block size: 128MB Jan 30 15:32:08.003213 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x620bb000-0x620bbfff] (4096 bytes) Jan 30 15:32:08.003219 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Jan 30 15:32:08.003225 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 15:32:08.003230 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 15:32:08.003235 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 15:32:08.003240 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 15:32:08.003246 kernel: audit: initializing netlink subsys (disabled) Jan 30 15:32:08.003251 kernel: audit: type=2000 audit(1738251122.112:1): state=initialized audit_enabled=0 res=1 Jan 30 15:32:08.003256 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 15:32:08.003261 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 15:32:08.003268 kernel: cpuidle: using governor menu Jan 30 15:32:08.003273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 15:32:08.003278 kernel: dca service started, version 1.12.1 Jan 30 15:32:08.003283 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 15:32:08.003289 kernel: PCI: Using configuration type 1 for base access Jan 30 15:32:08.003294 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 15:32:08.003299 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 15:32:08.003305 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 15:32:08.003311 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 15:32:08.003316 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 15:32:08.003321 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 15:32:08.003326 kernel: ACPI: Added _OSI(Module Device) Jan 30 15:32:08.003332 kernel: ACPI: Added _OSI(Processor Device) Jan 30 15:32:08.003337 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 15:32:08.003342 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 15:32:08.003349 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 15:32:08.003355 kernel: ACPI: Dynamic OEM Table Load: Jan 30 15:32:08.003360 kernel: ACPI: SSDT 0xFFFF89BA41CFCC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 15:32:08.003366 kernel: ACPI: Dynamic OEM Table Load: Jan 30 15:32:08.003371 kernel: ACPI: SSDT 0xFFFF89BA41CE9800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 15:32:08.003377 kernel: ACPI: Dynamic OEM Table Load: Jan 30 15:32:08.003382 kernel: ACPI: SSDT 0xFFFF89BA4024F500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 15:32:08.003387 kernel: ACPI: Dynamic OEM Table Load: Jan 30 15:32:08.003392 kernel: ACPI: SSDT 0xFFFF89BA41CEB800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 15:32:08.003398 kernel: ACPI: Dynamic OEM Table Load: Jan 30 15:32:08.003403 kernel: ACPI: SSDT 0xFFFF89BA40129000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 15:32:08.003408 kernel: ACPI: Dynamic OEM Table Load: Jan 30 15:32:08.003414 kernel: ACPI: SSDT 0xFFFF89BA41CFE400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 15:32:08.003419 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 15:32:08.003425 kernel: ACPI: Interpreter enabled Jan 30 15:32:08.003430 kernel: ACPI: PM: (supports S0 S5) Jan 30 15:32:08.003435 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 15:32:08.003440 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 15:32:08.003446 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 15:32:08.003451 kernel: HEST: Table parsing has been initialized. Jan 30 15:32:08.003456 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jan 30 15:32:08.003462 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 15:32:08.003468 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 15:32:08.003473 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 15:32:08.003478 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 15:32:08.003484 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 15:32:08.003489 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 15:32:08.003494 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 15:32:08.003499 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 15:32:08.003505 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 30 15:32:08.003511 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 15:32:08.003516 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 15:32:08.003522 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 15:32:08.003527 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 15:32:08.003532 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 15:32:08.003537 kernel: ACPI: \PIN_: New power resource Jan 30 15:32:08.003543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 15:32:08.003611 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 15:32:08.003666 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 15:32:08.003714 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 15:32:08.003721 kernel: PCI host bridge to bus 0000:00 Jan 30 15:32:08.003769 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 15:32:08.003812 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 15:32:08.003853 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 15:32:08.003894 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Jan 30 15:32:08.003937 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 15:32:08.003978 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 15:32:08.004035 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 15:32:08.004088 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 15:32:08.004136 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.004187 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Jan 30 15:32:08.004238 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.004288 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Jan 30 15:32:08.004335 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Jan 30 15:32:08.004385 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Jan 30 15:32:08.004432 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Jan 30 15:32:08.004482 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 15:32:08.004529 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Jan 30 15:32:08.004583 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 15:32:08.004631 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Jan 30 15:32:08.004683 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 15:32:08.004731 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Jan 30 15:32:08.004780 kernel: pci 0000:00:14.0: PME# 
supported from D3hot D3cold Jan 30 15:32:08.004837 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 15:32:08.004887 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Jan 30 15:32:08.004934 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Jan 30 15:32:08.004985 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 15:32:08.005032 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 15:32:08.005082 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 15:32:08.005129 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 15:32:08.005182 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 15:32:08.005228 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Jan 30 15:32:08.005275 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 15:32:08.005325 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 15:32:08.005402 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Jan 30 15:32:08.005463 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 15:32:08.005514 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 15:32:08.005564 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Jan 30 15:32:08.005610 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 15:32:08.005662 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 15:32:08.005709 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Jan 30 15:32:08.005758 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Jan 30 15:32:08.005803 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Jan 30 15:32:08.005849 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Jan 30 15:32:08.005895 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Jan 30 15:32:08.005941 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Jan 30 15:32:08.005987 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 30 15:32:08.006038 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 15:32:08.006086 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.006138 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 15:32:08.006185 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.006238 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 15:32:08.006285 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.006338 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 15:32:08.006391 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.006443 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Jan 30 15:32:08.006490 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.006543 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 15:32:08.006591 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 15:32:08.006643 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 15:32:08.006697 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 15:32:08.006744 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Jan 30 15:32:08.006791 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 15:32:08.006840 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 15:32:08.006887 kernel: pci 
0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 15:32:08.006934 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 15:32:08.006988 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 15:32:08.007039 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 15:32:08.007088 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Jan 30 15:32:08.007136 kernel: pci 0000:02:00.0: PME# supported from D3cold Jan 30 15:32:08.007183 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 15:32:08.007232 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 15:32:08.007284 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 15:32:08.007335 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 15:32:08.007410 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref] Jan 30 15:32:08.007475 kernel: pci 0000:02:00.1: PME# supported from D3cold Jan 30 15:32:08.007523 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 15:32:08.007571 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 15:32:08.007620 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 30 15:32:08.007666 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Jan 30 15:32:08.007714 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 15:32:08.007763 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 30 15:32:08.007818 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 15:32:08.007865 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 15:32:08.007914 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Jan 30 15:32:08.007962 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 15:32:08.008010 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Jan 30 15:32:08.008059 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.008108 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 30 15:32:08.008156 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 15:32:08.008202 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Jan 30 15:32:08.008254 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jan 30 15:32:08.008302 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 30 15:32:08.008353 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Jan 30 15:32:08.008402 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 15:32:08.008453 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Jan 30 15:32:08.008500 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 30 15:32:08.008549 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 30 15:32:08.008596 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 15:32:08.008643 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Jan 30 15:32:08.008690 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 30 15:32:08.008743 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 15:32:08.008794 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 30 15:32:08.008844 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 15:32:08.008893 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 15:32:08.008940 kernel: pci 
0000:00:1c.1: PCI bridge to [bus 07-08] Jan 30 15:32:08.008988 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 30 15:32:08.009034 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Jan 30 15:32:08.009087 kernel: pci_bus 0000:08: extended config space not accessible Jan 30 15:32:08.009141 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 15:32:08.009194 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Jan 30 15:32:08.009244 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Jan 30 15:32:08.009293 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 15:32:08.009344 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 15:32:08.009398 kernel: pci 0000:08:00.0: supports D1 D2 Jan 30 15:32:08.009448 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 15:32:08.009498 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 30 15:32:08.009546 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 30 15:32:08.009597 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Jan 30 15:32:08.009607 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 15:32:08.009613 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 15:32:08.009619 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 15:32:08.009624 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 15:32:08.009630 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 15:32:08.009636 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 15:32:08.009641 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 15:32:08.009648 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 15:32:08.009653 kernel: iommu: Default domain type: Translated Jan 30 15:32:08.009659 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 15:32:08.009665 kernel: PCI: Using ACPI for IRQ routing Jan 30 15:32:08.009670 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 15:32:08.009676 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 15:32:08.009681 kernel: e820: reserve RAM buffer [mem 0x620bb000-0x63ffffff] Jan 30 15:32:08.009687 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Jan 30 15:32:08.009692 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Jan 30 15:32:08.009699 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Jan 30 15:32:08.009747 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 30 15:32:08.009798 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 30 15:32:08.009848 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 15:32:08.009856 kernel: vgaarb: loaded Jan 30 15:32:08.009862 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 30 15:32:08.009868 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 30 15:32:08.009873 kernel: clocksource: Switched to clocksource tsc-early Jan 30 15:32:08.009879 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 15:32:08.009886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 15:32:08.009892 kernel: pnp: PnP ACPI init Jan 30 15:32:08.009942 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 15:32:08.009990 kernel: pnp 00:02: [dma 0 disabled] Jan 30 15:32:08.010036 kernel: pnp 00:03: [dma 0 disabled] Jan 30 15:32:08.010082 kernel: system 00:04: [io 
0x0680-0x069f] has been reserved Jan 30 15:32:08.010126 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 15:32:08.010172 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 15:32:08.010218 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 15:32:08.010261 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 15:32:08.010303 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 15:32:08.010349 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 15:32:08.010438 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 15:32:08.010485 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 15:32:08.010529 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 15:32:08.010571 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 15:32:08.010618 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 15:32:08.010661 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 15:32:08.010704 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 15:32:08.010746 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 15:32:08.010791 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 15:32:08.010833 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 15:32:08.010876 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 15:32:08.010924 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 15:32:08.010932 kernel: pnp: PnP ACPI: found 10 devices Jan 30 15:32:08.010940 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 15:32:08.010945 kernel: NET: Registered PF_INET protocol family Jan 30 15:32:08.010952 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 15:32:08.010958 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 15:32:08.010964 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 15:32:08.010969 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 15:32:08.010975 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 15:32:08.010980 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 15:32:08.010986 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 15:32:08.010992 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 15:32:08.010997 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 15:32:08.011004 kernel: NET: Registered PF_XDP protocol family Jan 30 15:32:08.011052 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Jan 30 15:32:08.011099 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Jan 30 15:32:08.011147 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Jan 30 15:32:08.011194 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 15:32:08.011246 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 15:32:08.011295 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 15:32:08.011343 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 
15:32:08.011395 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 15:32:08.011442 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 30 15:32:08.011489 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Jan 30 15:32:08.011536 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 15:32:08.011583 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 30 15:32:08.011632 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 30 15:32:08.011680 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 15:32:08.011726 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Jan 30 15:32:08.011773 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 30 15:32:08.011820 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 15:32:08.011868 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Jan 30 15:32:08.011915 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 30 15:32:08.011963 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 30 15:32:08.012013 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 30 15:32:08.012063 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Jan 30 15:32:08.012109 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 30 15:32:08.012157 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 30 15:32:08.012204 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Jan 30 15:32:08.012246 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 15:32:08.012289 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 15:32:08.012330 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 15:32:08.012374 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 15:32:08.012418 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Jan 30 15:32:08.012459 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 15:32:08.012505 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Jan 30 15:32:08.012549 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 15:32:08.012595 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 30 15:32:08.012638 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Jan 30 15:32:08.012687 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 30 15:32:08.012731 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Jan 30 15:32:08.012778 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 15:32:08.012820 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Jan 30 15:32:08.012866 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 30 15:32:08.012911 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Jan 30 15:32:08.012919 kernel: PCI: CLS 64 bytes, default 64 Jan 30 15:32:08.012926 kernel: DMAR: No ATSR found Jan 30 15:32:08.012932 kernel: DMAR: No SATC found Jan 30 15:32:08.012937 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 30 15:32:08.012943 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 30 15:32:08.012949 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 30 15:32:08.012954 kernel: DMAR: IOMMU feature pasid inconsistent Jan 30 15:32:08.012960 kernel: DMAR: IOMMU feature eafs inconsistent Jan 30 15:32:08.012966 kernel: DMAR: IOMMU feature prs inconsistent Jan 30 15:32:08.012971 kernel: DMAR: IOMMU feature nest 
inconsistent Jan 30 15:32:08.012977 kernel: DMAR: IOMMU feature mts inconsistent Jan 30 15:32:08.012983 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 30 15:32:08.012989 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 30 15:32:08.012994 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 15:32:08.013000 kernel: DMAR: dmar1: Using Queued invalidation Jan 30 15:32:08.013047 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 30 15:32:08.013095 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 30 15:32:08.013142 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 30 15:32:08.013189 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 30 15:32:08.013238 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 30 15:32:08.013285 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 30 15:32:08.013332 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 30 15:32:08.013381 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 30 15:32:08.013428 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 30 15:32:08.013475 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 30 15:32:08.013521 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 30 15:32:08.013567 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 30 15:32:08.013616 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 30 15:32:08.013663 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 30 15:32:08.013709 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 30 15:32:08.013756 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 30 15:32:08.013804 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 30 15:32:08.013851 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 30 15:32:08.013898 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 30 15:32:08.013945 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 30 15:32:08.013991 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 30 15:32:08.014040 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 30 15:32:08.014087 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 30 15:32:08.014136 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 30 15:32:08.014184 kernel: pci 0000:02:00.1: Adding to iommu group 2 Jan 30 15:32:08.014232 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 15:32:08.014281 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 30 15:32:08.014328 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 30 15:32:08.014426 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 30 15:32:08.014437 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 15:32:08.014443 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 15:32:08.014449 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Jan 30 15:32:08.014454 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 30 15:32:08.014460 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 15:32:08.014465 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 15:32:08.014471 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 15:32:08.014477 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 30 15:32:08.014528 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 15:32:08.014538 kernel: Initialise system trusted keyrings Jan 30 15:32:08.014544 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 15:32:08.014549 kernel: Key type asymmetric registered Jan 30 
15:32:08.014555 kernel: Asymmetric key parser 'x509' registered Jan 30 15:32:08.014560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 15:32:08.014566 kernel: io scheduler mq-deadline registered Jan 30 15:32:08.014571 kernel: io scheduler kyber registered Jan 30 15:32:08.014577 kernel: io scheduler bfq registered Jan 30 15:32:08.014625 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 30 15:32:08.014672 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 30 15:32:08.014719 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 30 15:32:08.014765 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jan 30 15:32:08.014813 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 30 15:32:08.014861 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 30 15:32:08.014908 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 30 15:32:08.014963 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 15:32:08.014972 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 15:32:08.014978 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 15:32:08.014984 kernel: pstore: Using crash dump compression: deflate Jan 30 15:32:08.014989 kernel: pstore: Registered erst as persistent store backend Jan 30 15:32:08.014995 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 15:32:08.015001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 15:32:08.015006 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 15:32:08.015014 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 15:32:08.015060 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 15:32:08.015068 kernel: i8042: PNP: No PS/2 controller found. Jan 30 15:32:08.015111 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 15:32:08.015154 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 15:32:08.015197 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T15:32:06 UTC (1738251126) Jan 30 15:32:08.015239 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 15:32:08.015248 kernel: intel_pstate: Intel P-state driver initializing Jan 30 15:32:08.015255 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 15:32:08.015261 kernel: intel_pstate: HWP enabled Jan 30 15:32:08.015266 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 30 15:32:08.015272 kernel: vesafb: scrolling: redraw Jan 30 15:32:08.015277 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 30 15:32:08.015283 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x0000000058ce1075, using 768k, total 768k Jan 30 15:32:08.015289 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 15:32:08.015294 kernel: fb0: VESA VGA frame buffer device Jan 30 15:32:08.015300 kernel: NET: Registered PF_INET6 protocol family Jan 30 15:32:08.015307 kernel: Segment Routing with IPv6 Jan 30 15:32:08.015312 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 15:32:08.015318 kernel: NET: Registered PF_PACKET protocol family Jan 30 15:32:08.015323 kernel: Key type dns_resolver registered Jan 30 15:32:08.015329 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 30 15:32:08.015334 kernel: IPI shorthand broadcast: enabled Jan 30 15:32:08.015340 kernel: sched_clock: Marking stable (1719001103, 1391413258)->(4572722793, -1462308432) Jan 30 15:32:08.015346 kernel: registered taskstats version 1 Jan 30 15:32:08.015353 kernel: Loading compiled-in X.509 certificates Jan 30 15:32:08.015360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 15:32:08.015366 kernel: Key type .fscrypt registered Jan 30 15:32:08.015372 kernel: Key type fscrypt-provisioning registered Jan 30 15:32:08.015377 kernel: ima: Allocated hash algorithm: sha1 Jan 30 15:32:08.015383 kernel: ima: No architecture policies found Jan 30 15:32:08.015388 kernel: clk: Disabling unused clocks Jan 30 15:32:08.015394 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 15:32:08.015399 kernel: Write protecting the kernel read-only data: 36864k Jan 30 15:32:08.015405 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 15:32:08.015412 kernel: Run /init as init process Jan 30 15:32:08.015417 kernel: with arguments: Jan 30 15:32:08.015423 kernel: /init Jan 30 15:32:08.015429 kernel: with environment: Jan 30 15:32:08.015434 kernel: HOME=/ Jan 30 15:32:08.015439 kernel: TERM=linux Jan 30 15:32:08.015445 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 15:32:08.015452 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:32:08.015460 systemd[1]: Detected architecture x86-64. Jan 30 15:32:08.015466 systemd[1]: Running in initrd. Jan 30 15:32:08.015472 systemd[1]: No hostname configured, using default hostname. Jan 30 15:32:08.015477 systemd[1]: Hostname set to <localhost>. Jan 30 15:32:08.015483 systemd[1]: Initializing machine ID from random generator. Jan 30 15:32:08.015489 systemd[1]: Queued start job for default target initrd.target. Jan 30 15:32:08.015495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:32:08.015501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:32:08.015508 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 15:32:08.015514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:32:08.015520 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 15:32:08.015526 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 15:32:08.015532 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 15:32:08.015538 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 15:32:08.015545 kernel: tsc: Refined TSC clocksource calibration: 3407.997 MHz Jan 30 15:32:08.015551 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd171fc9, max_idle_ns: 440795303639 ns Jan 30 15:32:08.015556 kernel: clocksource: Switched to clocksource tsc Jan 30 15:32:08.015562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:32:08.015568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:32:08.015574 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:32:08.015580 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:32:08.015586 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:32:08.015591 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:32:08.015598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:32:08.015604 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:32:08.015610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:32:08.015616 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:32:08.015622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:32:08.015628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:32:08.015633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:32:08.015639 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:32:08.015646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:32:08.015652 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:32:08.015658 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:32:08.015663 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:32:08.015669 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:32:08.015685 systemd-journald[266]: Collecting audit messages is disabled. Jan 30 15:32:08.015700 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:32:08.015707 systemd-journald[266]: Journal started Jan 30 15:32:08.015719 systemd-journald[266]: Runtime Journal (/run/log/journal/c046678fddac42a98b2ee2eab8ad5182) is 8.0M, max 636.6M, 628.6M free. Jan 30 15:32:08.050195 systemd-modules-load[268]: Inserted module 'overlay' Jan 30 15:32:08.059541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:32:08.080246 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:32:08.159587 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:32:08.159604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:32:08.159613 kernel: Bridge firewalling registered Jan 30 15:32:08.140501 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 30 15:32:08.140622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:32:08.170668 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:32:08.187693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:32:08.208725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 15:32:08.242823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:32:08.248170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:32:08.267994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:32:08.268409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:32:08.271787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:32:08.273068 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:32:08.273840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:32:08.274937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:32:08.276002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:32:08.279990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:32:08.285564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:32:08.297095 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:32:08.300674 systemd-resolved[297]: Positive Trust Anchors: Jan 30 15:32:08.300683 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:32:08.300717 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:32:08.302963 systemd-resolved[297]: Defaulting to hostname 'linux'. Jan 30 15:32:08.307647 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:32:08.324703 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:32:08.449929 dracut-cmdline[305]: dracut-dracut-053 Jan 30 15:32:08.457570 dracut-cmdline[305]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:32:08.642382 kernel: SCSI subsystem initialized Jan 30 15:32:08.665402 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:32:08.688354 kernel: iscsi: registered transport (tcp) Jan 30 15:32:08.719829 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:32:08.719847 kernel: QLogic iSCSI HBA Driver Jan 30 15:32:08.752807 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 15:32:08.775663 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:32:08.831265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 15:32:08.831284 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:32:08.851005 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:32:08.908422 kernel: raid6: avx2x4 gen() 53250 MB/s Jan 30 15:32:08.940426 kernel: raid6: avx2x2 gen() 53882 MB/s Jan 30 15:32:08.976842 kernel: raid6: avx2x1 gen() 45234 MB/s Jan 30 15:32:08.976861 kernel: raid6: using algorithm avx2x2 gen() 53882 MB/s Jan 30 15:32:09.024910 kernel: raid6: .... xor() 31283 MB/s, rmw enabled Jan 30 15:32:09.024927 kernel: raid6: using avx2x2 recovery algorithm Jan 30 15:32:09.066380 kernel: xor: automatically using best checksumming function avx Jan 30 15:32:09.183397 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:32:09.188919 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:32:09.217679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:32:09.224734 systemd-udevd[491]: Using default interface naming scheme 'v255'. Jan 30 15:32:09.229470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:32:09.263547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:32:09.309657 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Jan 30 15:32:09.327001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:32:09.349611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:32:09.440708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:32:09.473369 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 15:32:09.473396 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 15:32:09.475474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:32:09.562662 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 15:32:09.562688 kernel: ACPI: bus type USB registered Jan 30 15:32:09.562702 kernel: usbcore: registered new interface driver usbfs Jan 30 15:32:09.562716 kernel: usbcore: registered new interface driver hub Jan 30 15:32:09.562729 kernel: usbcore: registered new device driver usb Jan 30 15:32:09.513813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:32:09.593362 kernel: PTP clock support registered Jan 30 15:32:09.593385 kernel: libata version 3.00 loaded. Jan 30 15:32:09.593399 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 15:32:09.593417 kernel: AES CTR mode by8 optimization enabled Jan 30 15:32:09.513913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
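The dracut-cmdline entry above echoes the full set of kernel parameters it will honor. As an illustrative sketch (the parameter string below is abridged from that entry; on a live system the same text is readable from /proc/cmdline), splitting such a command line into bare flags and key=value options:

```python
# Sketch: parse a kernel command line into flags and key=value options.
# CMDLINE is abridged from the dracut-cmdline entry above.
CMDLINE = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 "
           "flatcar.first_boot=detected flatcar.autologin")

flags, options = [], {}
for token in CMDLINE.split():
    if "=" in token:
        key, value = token.split("=", 1)
        # Parameters like console= may legitimately repeat; keep each one.
        options.setdefault(key, []).append(value)
    else:
        flags.append(token)  # bare switches such as flatcar.autologin

print(flags)                # ['flatcar.autologin']
print(options["console"])  # ['tty0', 'ttyS1,115200n8']
```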
Jan 30 15:32:10.465447 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 15:32:10.465549 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 15:32:10.465621 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 30 15:32:10.465685 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 15:32:10.465746 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 15:32:10.465806 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 15:32:10.465865 kernel: scsi host0: ahci Jan 30 15:32:10.465927 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 15:32:10.465987 kernel: scsi host1: ahci Jan 30 15:32:10.466047 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 15:32:10.466107 kernel: scsi host2: ahci Jan 30 15:32:10.466165 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 15:32:10.466223 kernel: scsi host3: ahci Jan 30 15:32:10.466280 kernel: hub 1-0:1.0: USB hub found Jan 30 15:32:10.466346 kernel: scsi host4: ahci Jan 30 15:32:10.466409 kernel: hub 1-0:1.0: 16 ports detected Jan 30 15:32:10.466470 kernel: scsi host5: ahci Jan 30 15:32:10.466528 kernel: hub 2-0:1.0: USB hub found Jan 30 15:32:10.466589 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 15:32:10.466598 kernel: scsi host6: ahci Jan 30 15:32:10.466653 kernel: scsi host7: ahci Jan 30 15:32:10.466712 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Jan 30 15:32:10.466720 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Jan 30 15:32:10.466729 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Jan 30 15:32:10.466736 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Jan 30 15:32:10.466743 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Jan 30 15:32:10.466749 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Jan 30 15:32:10.466756 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Jan 30 15:32:10.466763 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Jan 30 15:32:10.466770 kernel: hub 2-0:1.0: 10 ports detected Jan 30 15:32:10.466827 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 30 15:32:10.466836 kernel: pps pps0: new PPS source ptp0 Jan 30 15:32:10.466896 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 15:32:10.573836 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 30 15:32:10.573913 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.573922 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 15:32:10.573985 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.573993 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d2 Jan 30 15:32:10.574057 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 15:32:10.574066 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 30 15:32:10.574126 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574134 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 15:32:10.574193 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574201 kernel: pps pps1: new PPS source ptp1 Jan 30 15:32:10.574257 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 15:32:10.574265 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 30 15:32:10.574330 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 15:32:10.574339 kernel: hub 1-14:1.0: USB hub found Jan 30 15:32:10.574447 kernel: hub 1-14:1.0: 4 ports detected Jan 30 15:32:10.574506 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 15:32:10.574565 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 30 15:32:10.574573 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d3 Jan 30 15:32:10.574632 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574640 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 30 15:32:10.574700 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 15:32:10.574708 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 15:32:10.574766 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 15:32:10.574774 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Jan 30 15:32:11.223702 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 15:32:11.223713 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 15:32:11.223792 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 15:32:11.223911 kernel: ata2.00: Features: NCQ-prio Jan 30 15:32:11.223921 kernel: ata1.00: Features: NCQ-prio Jan 30 15:32:11.223928 kernel: ata2.00: configured for UDMA/133 Jan 30 15:32:11.223936 kernel: ata1.00: configured for UDMA/133 Jan 30 15:32:11.223943 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 15:32:11.224023 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 30 15:32:11.224087 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 30 15:32:11.224160 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 15:32:11.224171 kernel: usbcore: registered new interface driver usbhid Jan 30 15:32:11.224178 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 30 15:32:11.224245 kernel: usbhid: USB HID core driver Jan 30 15:32:11.224254 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 15:32:11.224261 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 15:32:11.224268 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 15:32:11.224330 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.224338 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 15:32:11.224408 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 15:32:11.224470 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 15:32:11.224530 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 15:32:11.224599 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 15:32:11.224669 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 15:32:11.224729 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 15:32:11.224788 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 
15:32:11.224796 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:32:11.224806 kernel: GPT:9289727 != 937703087 Jan 30 15:32:11.224813 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:32:11.224820 kernel: GPT:9289727 != 937703087 Jan 30 15:32:11.224827 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:32:11.224834 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.224841 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 15:32:11.224899 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 30 15:32:11.224964 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 15:32:11.225024 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 15:32:11.225095 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 15:32:11.225155 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 15:32:11.225164 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 15:32:11.225222 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 15:32:11.225288 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 15:32:11.225358 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 15:32:11.225421 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 15:32:11.225429 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 15:32:11.225486 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 15:32:11.225551 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Jan 30 15:32:11.837571 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (672) Jan 30 15:32:11.837600 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 15:32:11.837818 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (577) Jan 30 15:32:11.837848 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.837878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.837904 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.837929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.837948 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:11.837970 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:11.837990 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 15:32:11.838175 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 30 15:32:11.838360 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 15:32:09.583660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:32:09.618952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:32:11.869499 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jan 30 15:32:11.869580 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 30 15:32:09.619160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:10.561471 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
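The GPT warnings above are expected on a first boot from a disk image built for a smaller disk: the backup (alternate) GPT header is not yet at the true end of /dev/sda, and the disk-uuid step that follows rewrites the headers. A sketch of the equivalent manual repair, assuming the sgdisk tool from gptfdisk is installed (`sgdisk -e`, i.e. --move-second-header, relocates the backup structures to the end of the disk; the log itself suggests GNU Parted as an alternative):

```python
# Sketch: relocate GPT backup structures to the end of a grown disk.
# Assumes gptfdisk's sgdisk binary is installed; operate on the whole
# disk (e.g. /dev/sda), never a partition. Destructive if misused.
import subprocess

def relocate_backup_gpt(disk: str = "/dev/sda") -> None:
    # -e / --move-second-header: place the backup header at the disk end.
    subprocess.run(["sgdisk", "-e", disk], check=True)

if __name__ == "__main__":
    relocate_backup_gpt("/dev/sda")
```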
Jan 30 15:32:10.623639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:32:10.699595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:32:11.151572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:32:11.939429 disk-uuid[709]: Primary Header is updated. Jan 30 15:32:11.939429 disk-uuid[709]: Secondary Entries is updated. Jan 30 15:32:11.939429 disk-uuid[709]: Secondary Header is updated. Jan 30 15:32:11.166158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:32:11.166232 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:32:11.203485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:32:11.225600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:11.343145 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 30 15:32:11.354553 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:32:11.371026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 30 15:32:11.386027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 15:32:11.400499 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 15:32:11.411429 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 30 15:32:11.428460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:32:11.445486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:32:11.476679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:32:12.525452 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 15:32:12.544973 disk-uuid[710]: The operation has completed successfully. Jan 30 15:32:12.553469 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 15:32:12.581261 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:32:12.581313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:32:12.602609 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:32:12.648566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 15:32:12.648635 sh[750]: Success Jan 30 15:32:12.683034 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:32:12.702259 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:32:12.718695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 15:32:12.761168 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 15:32:12.761186 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:12.782616 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:32:12.801686 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:32:12.819736 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:32:12.857390 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 15:32:12.859879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:32:12.868768 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:32:12.884570 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:32:12.902772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 15:32:13.002524 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:13.002538 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:13.002549 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:32:13.002556 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:32:13.002563 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:32:13.026427 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:13.037685 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:32:13.058558 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:32:13.085024 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:32:13.115475 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:32:13.125100 unknown[839]: fetched base config from "system" Jan 30 15:32:13.122893 ignition[839]: Ignition 2.19.0 Jan 30 15:32:13.125104 unknown[839]: fetched user config from "system" Jan 30 15:32:13.122898 ignition[839]: Stage: fetch-offline Jan 30 15:32:13.126048 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:32:13.122920 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:13.126471 systemd-networkd[934]: lo: Link UP Jan 30 15:32:13.122926 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:13.126473 systemd-networkd[934]: lo: Gained carrier Jan 30 15:32:13.122980 ignition[839]: parsed url from cmdline: "" Jan 30 15:32:13.128803 systemd-networkd[934]: Enumeration completed Jan 30 15:32:13.122982 ignition[839]: no config URL provided Jan 30 15:32:13.129722 systemd-networkd[934]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:32:13.122984 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:32:13.143608 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:32:13.123007 ignition[839]: parsing config with SHA512: 5602b49db9414e8fb4d7bf652760780e862feb096e32c0b30bd06c2bf23667567130a37fd8150f35e0cae8ffa3a761dad1bfb7dbe85d194ebbef4b33aa2d60d3 Jan 30 15:32:13.157709 systemd-networkd[934]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
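Ignition logs the SHA512 of every config it parses (the "parsing config with SHA512: ..." entry above), which makes it possible to confirm after the fact which file was actually consumed. A minimal sketch that reproduces such a digest for comparison, using the config path named in the log:

```python
# Sketch: compute the SHA512 of an Ignition config for comparison with
# the digest Ignition prints ("parsing config with SHA512: ...").
import hashlib
from pathlib import Path

def config_sha512(path: str = "/usr/lib/ignition/user.ign") -> str:
    return hashlib.sha512(Path(path).read_bytes()).hexdigest()

if __name__ == "__main__":
    print(config_sha512())
```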
Jan 30 15:32:13.125362 ignition[839]: fetch-offline: fetch-offline passed Jan 30 15:32:13.161908 systemd[1]: Reached target network.target - Network. Jan 30 15:32:13.125364 ignition[839]: POST message to Packet Timeline Jan 30 15:32:13.175513 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 15:32:13.125367 ignition[839]: POST Status error: resource requires networking Jan 30 15:32:13.186223 systemd-networkd[934]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:32:13.125404 ignition[839]: Ignition finished successfully Jan 30 15:32:13.188613 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:32:13.209269 ignition[947]: Ignition 2.19.0 Jan 30 15:32:13.396514 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 30 15:32:13.392202 systemd-networkd[934]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:32:13.209283 ignition[947]: Stage: kargs Jan 30 15:32:13.209636 ignition[947]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:13.209657 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:13.211644 ignition[947]: kargs: kargs passed Jan 30 15:32:13.211654 ignition[947]: POST message to Packet Timeline Jan 30 15:32:13.211679 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:13.213014 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56558->[::1]:53: read: connection refused Jan 30 15:32:13.413836 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 15:32:13.414781 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44571->[::1]:53: read: connection refused Jan 30 15:32:13.621387 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 30 15:32:13.622752 systemd-networkd[934]: eno1: Link UP Jan 30 15:32:13.622890 systemd-networkd[934]: eno2: Link UP Jan 30 15:32:13.623015 systemd-networkd[934]: enp2s0f0np0: Link UP Jan 30 15:32:13.623166 systemd-networkd[934]: enp2s0f0np0: Gained carrier Jan 30 15:32:13.632577 systemd-networkd[934]: enp2s0f1np1: Link UP Jan 30 15:32:13.663542 systemd-networkd[934]: enp2s0f0np0: DHCPv4 address 139.178.70.183/31, gateway 139.178.70.182 acquired from 145.40.83.140 Jan 30 15:32:13.815217 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 15:32:13.816490 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54656->[::1]:53: read: connection refused Jan 30 15:32:14.423032 systemd-networkd[934]: enp2s0f1np1: Gained carrier Jan 30 15:32:14.616726 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 15:32:14.617855 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56038->[::1]:53: read: connection refused Jan 30 15:32:14.678856 systemd-networkd[934]: enp2s0f0np0: Gained IPv6LL Jan 30 15:32:15.574864 systemd-networkd[934]: enp2s0f1np1: Gained IPv6LL Jan 30 15:32:16.219675 ignition[947]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 15:32:16.220781 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp 
[::1]:50850->[::1]:53: read: connection refused Jan 30 15:32:19.424306 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 15:32:19.965170 ignition[947]: GET result: OK Jan 30 15:32:20.363415 ignition[947]: Ignition finished successfully Jan 30 15:32:20.368103 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:32:20.400574 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:32:20.406996 ignition[962]: Ignition 2.19.0 Jan 30 15:32:20.407000 ignition[962]: Stage: disks Jan 30 15:32:20.407110 ignition[962]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:20.407116 ignition[962]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:20.407710 ignition[962]: disks: disks passed Jan 30 15:32:20.407713 ignition[962]: POST message to Packet Timeline Jan 30 15:32:20.407723 ignition[962]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:21.034819 ignition[962]: GET result: OK Jan 30 15:32:21.392886 ignition[962]: Ignition finished successfully Jan 30 15:32:21.395416 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 15:32:21.411692 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 15:32:21.429593 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:32:21.450578 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:32:21.471732 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:32:21.492641 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:32:21.512598 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:32:21.549322 systemd-fsck[979]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 15:32:21.560783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:32:21.590542 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:32:21.687351 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 15:32:21.687716 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:32:21.696834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:32:21.713583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:32:21.739109 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:32:21.783445 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (988) Jan 30 15:32:21.783458 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:21.753921 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 15:32:21.863453 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:21.863464 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:32:21.863475 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:32:21.863482 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:32:21.882910 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 15:32:21.893427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
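The kargs stage above keeps retrying its metadata GET with growing pauses (attempts #1 through #6) until the links come up and the DHCP lease lands, at which point the DNS lookups stop failing against [::1]:53. A sketch of the same retry-with-backoff pattern, purely illustrative: the URL is the one in the log, but the delay schedule here is an assumption, not Ignition's actual one:

```python
# Sketch: GET with retry and growing backoff, mirroring the attempt
# #1..#6 pattern in the log. Delays are illustrative, not Ignition's.
import time
import urllib.request

def fetch_with_backoff(url: str, attempts: int = 6) -> bytes:
    delay = 1.0
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:  # covers DNS refusals, timeouts, link-down
            if attempt == attempts:
                raise
            print(f"GET error on attempt #{attempt}: {err}")
            time.sleep(delay)
            delay *= 2  # back off before the next attempt

if __name__ == "__main__":
    print(len(fetch_with_backoff("https://metadata.packet.net/metadata")))
```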
Jan 30 15:32:21.893445 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:32:21.925795 coreos-metadata[990]: Jan 30 15:32:21.925 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 15:32:21.956761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:32:21.966534 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:32:21.991527 coreos-metadata[1006]: Jan 30 15:32:21.969 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 15:32:21.993623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:32:22.022543 initrd-setup-root[1020]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:32:22.033492 initrd-setup-root[1027]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:32:22.043478 initrd-setup-root[1034]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:32:22.054478 initrd-setup-root[1041]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:32:22.074906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:32:22.102595 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:32:22.137547 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:22.122233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:32:22.146175 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 15:32:22.169941 ignition[1108]: INFO : Ignition 2.19.0 Jan 30 15:32:22.169941 ignition[1108]: INFO : Stage: mount Jan 30 15:32:22.184453 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:22.184453 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:22.184453 ignition[1108]: INFO : mount: mount passed Jan 30 15:32:22.184453 ignition[1108]: INFO : POST message to Packet Timeline Jan 30 15:32:22.184453 ignition[1108]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:22.179646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 15:32:22.493092 coreos-metadata[1006]: Jan 30 15:32:22.492 INFO Fetch successful Jan 30 15:32:22.543123 coreos-metadata[990]: Jan 30 15:32:22.543 INFO Fetch successful Jan 30 15:32:22.571018 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 15:32:22.571077 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 15:32:22.614527 coreos-metadata[990]: Jan 30 15:32:22.578 INFO wrote hostname ci-4081.3.0-a-8297fae690 to /sysroot/etc/hostname Jan 30 15:32:22.582668 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 15:32:22.822793 ignition[1108]: INFO : GET result: OK Jan 30 15:32:23.170263 ignition[1108]: INFO : Ignition finished successfully Jan 30 15:32:23.172738 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:32:23.206609 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:32:23.217747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 15:32:23.262352 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1132) Jan 30 15:32:23.297253 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:32:23.297269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:32:23.314410 kernel: BTRFS info (device sda6): using free space tree Jan 30 15:32:23.351524 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 15:32:23.351540 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 15:32:23.364290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:32:23.385860 ignition[1149]: INFO : Ignition 2.19.0 Jan 30 15:32:23.385860 ignition[1149]: INFO : Stage: files Jan 30 15:32:23.399611 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:23.399611 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:23.399611 ignition[1149]: DEBUG : files: compiled without relabeling support, skipping Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 15:32:23.399611 ignition[1149]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:32:23.399611 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 15:32:23.390090 unknown[1149]: wrote ssh authorized keys file for user: core Jan 30 15:32:23.565559 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 15:32:23.904929 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:32:23.904929 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:23.937556 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 15:32:24.372400 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 15:32:24.598355 ignition[1149]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:32:24.598355 ignition[1149]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:32:24.627657 ignition[1149]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:32:24.627657 
ignition[1149]: INFO : files: files passed Jan 30 15:32:24.627657 ignition[1149]: INFO : POST message to Packet Timeline Jan 30 15:32:24.627657 ignition[1149]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:25.378738 ignition[1149]: INFO : GET result: OK Jan 30 15:32:25.761008 ignition[1149]: INFO : Ignition finished successfully Jan 30 15:32:25.763888 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 15:32:25.803661 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 15:32:25.804266 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 15:32:25.822851 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 15:32:25.822924 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 15:32:25.869395 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:32:25.885944 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 15:32:25.916554 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:32:25.916554 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:32:25.930568 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:32:25.923626 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 15:32:25.994542 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 15:32:25.994873 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 15:32:26.015545 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 15:32:26.035702 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 15:32:26.055831 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 15:32:26.065752 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 15:32:26.145755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:32:26.169729 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 15:32:26.198325 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:32:26.209831 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:32:26.232043 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 15:32:26.249969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 15:32:26.250383 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:32:26.278088 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 15:32:26.300983 systemd[1]: Stopped target basic.target - Basic System. Jan 30 15:32:26.318961 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 15:32:26.338965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:32:26.359956 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 15:32:26.380973 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 30 15:32:26.402102 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:32:26.422998 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 15:32:26.443979 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 15:32:26.465093 systemd[1]: Stopped target swap.target - Swaps. Jan 30 15:32:26.482854 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 15:32:26.483256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:32:26.518788 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:32:26.528985 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:32:26.549838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 15:32:26.550303 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:32:26.572986 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 15:32:26.573412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 15:32:26.604937 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 15:32:26.605413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:32:26.625174 systemd[1]: Stopped target paths.target - Path Units. Jan 30 15:32:26.644830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 15:32:26.645288 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:32:26.665972 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 15:32:26.683989 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 15:32:26.701892 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 15:32:26.702186 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:32:26.722996 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 15:32:26.723297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:32:26.746047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 15:32:26.746476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:32:26.767053 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 15:32:26.873477 ignition[1211]: INFO : Ignition 2.19.0 Jan 30 15:32:26.873477 ignition[1211]: INFO : Stage: umount Jan 30 15:32:26.873477 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:32:26.873477 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 15:32:26.873477 ignition[1211]: INFO : umount: umount passed Jan 30 15:32:26.873477 ignition[1211]: INFO : POST message to Packet Timeline Jan 30 15:32:26.873477 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 15:32:26.767454 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 15:32:26.786039 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 15:32:26.786455 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 15:32:26.815543 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 15:32:26.845610 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 15:32:26.864438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 30 15:32:26.864530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:32:26.884579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 15:32:26.884682 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:32:26.916377 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 15:32:26.920381 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 15:32:26.920631 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 15:32:27.006426 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 15:32:27.006712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 15:32:27.463825 ignition[1211]: INFO : GET result: OK Jan 30 15:32:27.817816 ignition[1211]: INFO : Ignition finished successfully Jan 30 15:32:27.820928 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 15:32:27.821215 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 15:32:27.838643 systemd[1]: Stopped target network.target - Network. Jan 30 15:32:27.853645 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 15:32:27.853829 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 15:32:27.871729 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 15:32:27.871869 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 15:32:27.889678 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 15:32:27.889809 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 15:32:27.907789 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 15:32:27.907949 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 15:32:27.925735 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 15:32:27.925904 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 15:32:27.945191 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 15:32:27.954505 systemd-networkd[934]: enp2s0f0np0: DHCPv6 lease lost Jan 30 15:32:27.962561 systemd-networkd[934]: enp2s0f1np1: DHCPv6 lease lost Jan 30 15:32:27.962900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 15:32:27.981539 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 15:32:27.981813 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 15:32:28.000369 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 15:32:28.000689 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 15:32:28.020877 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 15:32:28.020988 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:32:28.059528 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 15:32:28.067508 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 15:32:28.067563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:32:28.089682 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:32:28.089751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:32:28.107767 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 15:32:28.107885 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 15:32:28.125845 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 15:32:28.126007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:32:28.147065 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:32:28.166795 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 15:32:28.167258 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:32:28.203607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 15:32:28.203752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 15:32:28.206973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 15:32:28.207078 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:32:28.236719 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 15:32:28.236860 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:32:28.267050 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 15:32:28.267326 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 15:32:28.305532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:32:28.305782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:32:28.354691 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 15:32:28.377514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 15:32:28.621486 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Jan 30 15:32:28.377743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:32:28.399690 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 15:32:28.399835 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:32:28.418656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 15:32:28.418789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:32:28.440639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:32:28.440774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:28.462663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 15:32:28.462877 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 15:32:28.483179 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 15:32:28.483424 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 15:32:28.505309 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 15:32:28.541848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 15:32:28.559025 systemd[1]: Switching root. 
Jan 30 15:32:28.731633 systemd-journald[266]: Journal stopped Jan 30 15:32:31.177159 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 15:32:31.177175 kernel: SELinux: policy capability open_perms=1 Jan 30 15:32:31.177183 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 15:32:31.177190 kernel: SELinux: policy capability always_check_network=0 Jan 30 15:32:31.177195 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 15:32:31.177200 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 15:32:31.177206 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 15:32:31.177212 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 15:32:31.177217 kernel: audit: type=1403 audit(1738251148.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 15:32:31.177224 systemd[1]: Successfully loaded SELinux policy in 156.061ms. Jan 30 15:32:31.177232 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.863ms. Jan 30 15:32:31.177239 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:32:31.177246 systemd[1]: Detected architecture x86-64. Jan 30 15:32:31.177252 systemd[1]: Detected first boot. Jan 30 15:32:31.177258 systemd[1]: Hostname set to <ci-4081.3.0-a-8297fae690>. Jan 30 15:32:31.177266 systemd[1]: Initializing machine ID from random generator. Jan 30 15:32:31.177273 zram_generator::config[1280]: No configuration found. Jan 30 15:32:31.177280 systemd[1]: Populated /etc with preset unit settings. Jan 30 15:32:31.177286 systemd[1]: Queued start job for default target multi-user.target. Jan 30 15:32:31.177292 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 15:32:31.177299 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 15:32:31.177306 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 15:32:31.177312 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 15:32:31.177319 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 15:32:31.177326 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 15:32:31.177332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 15:32:31.177339 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 15:32:31.177345 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 15:32:31.177364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:32:31.177371 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:32:31.177378 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 15:32:31.177385 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 15:32:31.177391 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 15:32:31.177398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 15:32:31.177404 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 30 15:32:31.177411 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:32:31.177419 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 15:32:31.177426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:32:31.177432 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:32:31.177439 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:32:31.177447 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:32:31.177454 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 15:32:31.177461 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 15:32:31.177468 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:32:31.177475 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:32:31.177482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:32:31.177489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:32:31.177496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:32:31.177502 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 15:32:31.177509 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 15:32:31.177517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 15:32:31.177524 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 15:32:31.177531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:32:31.177538 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 15:32:31.177545 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 15:32:31.177552 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 15:32:31.177559 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 15:32:31.177567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:32:31.177574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:32:31.177581 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 15:32:31.177588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:32:31.177594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:32:31.177601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:32:31.177608 kernel: ACPI: bus type drm_connector registered Jan 30 15:32:31.177614 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 15:32:31.177622 kernel: fuse: init (API version 7.39) Jan 30 15:32:31.177628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:32:31.177635 kernel: loop: module loaded Jan 30 15:32:31.177642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 30 15:32:31.177649 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 15:32:31.177656 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 15:32:31.177662 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:32:31.177669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:32:31.177685 systemd-journald[1402]: Collecting audit messages is disabled. Jan 30 15:32:31.177700 systemd-journald[1402]: Journal started Jan 30 15:32:31.177716 systemd-journald[1402]: Runtime Journal (/run/log/journal/23e08d03a8d84d5b8642ddb6bd4351a1) is 8.0M, max 636.6M, 628.6M free. Jan 30 15:32:31.218409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 15:32:31.253394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 15:32:31.288404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:32:31.340398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:32:31.361400 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:32:31.372361 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 15:32:31.382640 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 15:32:31.392606 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 15:32:31.402578 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 15:32:31.412592 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 15:32:31.422607 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 15:32:31.432806 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 15:32:31.443875 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:32:31.455981 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 15:32:31.456274 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 15:32:31.468191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:32:31.468641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:32:31.480195 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:32:31.480645 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:32:31.491191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:32:31.491648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:32:31.503199 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 15:32:31.503625 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 15:32:31.514189 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:32:31.514711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:32:31.524735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:32:31.534697 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 30 15:32:31.546712 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 15:32:31.558774 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:32:31.576652 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 15:32:31.598511 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 15:32:31.609384 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 15:32:31.619520 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 15:32:31.621190 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 15:32:31.632083 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 15:32:31.643492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:32:31.644228 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 15:32:31.646904 systemd-journald[1402]: Time spent on flushing to /var/log/journal/23e08d03a8d84d5b8642ddb6bd4351a1 is 13.070ms for 1391 entries. Jan 30 15:32:31.646904 systemd-journald[1402]: System Journal (/var/log/journal/23e08d03a8d84d5b8642ddb6bd4351a1) is 8.0M, max 195.6M, 187.6M free. Jan 30 15:32:31.683330 systemd-journald[1402]: Received client request to flush runtime journal. Jan 30 15:32:31.662498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:32:31.663160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:32:31.679164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:32:31.692661 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 15:32:31.704046 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jan 30 15:32:31.704056 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jan 30 15:32:31.705898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 15:32:31.717553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 15:32:31.728653 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 15:32:31.739598 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 15:32:31.750587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:32:31.760583 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:32:31.774068 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 15:32:31.796517 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 15:32:31.806770 udevadm[1445]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 15:32:31.812676 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 15:32:31.832551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:32:31.840263 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. 
Jan 30 15:32:31.840273 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jan 30 15:32:31.843678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:32:32.008744 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 15:32:32.031647 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:32:32.044093 systemd-udevd[1466]: Using default interface naming scheme 'v255'. Jan 30 15:32:32.060540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:32:32.077537 systemd[1]: Found device dev-ttyS1.device - /dev/ttyS1. Jan 30 15:32:32.109154 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 30 15:32:32.109213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1476) Jan 30 15:32:32.109230 kernel: ACPI: button: Sleep Button [SLPB] Jan 30 15:32:32.148049 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 15:32:32.168357 kernel: IPMI message handler: version 39.2 Jan 30 15:32:32.168406 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 15:32:32.168416 kernel: ACPI: button: Power Button [PWRF] Jan 30 15:32:32.220120 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 30 15:32:32.249424 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 30 15:32:32.291581 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 30 15:32:32.291726 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 30 15:32:32.295355 kernel: iTCO_vendor_support: vendor-support=0 Jan 30 15:32:32.295383 kernel: ipmi device interface Jan 30 15:32:32.295394 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 30 15:32:32.295475 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 30 15:32:32.382744 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:32:32.394177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:32:32.413294 kernel: ipmi_si: IPMI System Interface driver Jan 30 15:32:32.413341 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Jan 30 15:32:32.413456 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 30 15:32:32.459314 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 30 15:32:32.459327 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 30 15:32:32.459335 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 30 15:32:32.579288 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 30 15:32:32.579394 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 30 15:32:32.579490 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 30 15:32:32.579506 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 30 15:32:32.424493 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 15:32:32.624720 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 15:32:32.647081 kernel: intel_rapl_common: Found RAPL domain package Jan 30 15:32:32.647134 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Jan 30 15:32:32.647379 kernel: intel_rapl_common: Found RAPL domain core Jan 30 15:32:32.689006 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Jan 30 15:32:32.689284 kernel: intel_rapl_common: Found RAPL domain uncore Jan 30 15:32:32.689306 kernel: intel_rapl_common: Found RAPL domain dram Jan 30 15:32:32.740352 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 30 15:32:32.792351 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 30 15:32:32.794615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:32:32.806705 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 15:32:32.818488 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 15:32:32.829576 systemd-networkd[1556]: lo: Link UP Jan 30 15:32:32.829579 systemd-networkd[1556]: lo: Gained carrier Jan 30 15:32:32.832350 systemd-networkd[1556]: bond0: netdev ready Jan 30 15:32:32.833256 systemd-networkd[1556]: Enumeration completed Jan 30 15:32:32.833351 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:32:32.837776 lvm[1582]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:32:32.838189 systemd-networkd[1556]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:80.network. Jan 30 15:32:32.843235 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 15:32:32.887840 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 15:32:32.899766 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:32:32.920470 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 15:32:32.922543 lvm[1590]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:32:32.961861 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:32:32.972792 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:32:32.983457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 15:32:32.983472 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:32:32.993431 systemd[1]: Reached target machines.target - Containers. Jan 30 15:32:33.002021 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 15:32:33.022443 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 15:32:33.034107 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 15:32:33.043519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:32:33.044301 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 15:32:33.056026 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 15:32:33.067210 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 15:32:33.067748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 30 15:32:33.092357 kernel: loop0: detected capacity change from 0 to 8 Jan 30 15:32:33.093109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 15:32:33.111285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 15:32:33.111703 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 15:32:33.116382 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 15:32:33.173355 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 15:32:33.244354 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 15:32:33.269354 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 30 15:32:33.293395 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jan 30 15:32:33.294180 systemd-networkd[1556]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:81.network. Jan 30 15:32:33.328352 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 15:32:33.387363 ldconfig[1595]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 15:32:33.388660 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 15:32:33.399374 kernel: loop4: detected capacity change from 0 to 8 Jan 30 15:32:33.419649 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 15:32:33.448354 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 15:32:33.468399 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 30 15:32:33.468536 kernel: loop7: detected capacity change from 0 to 210664 Jan 30 15:32:33.474354 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jan 30 15:32:33.486844 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 30 15:32:33.487069 (sd-merge)[1612]: Merged extensions into '/usr'. Jan 30 15:32:33.503535 systemd-networkd[1556]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 30 15:32:33.505174 systemd-networkd[1556]: enp2s0f0np0: Link UP Jan 30 15:32:33.505331 systemd-networkd[1556]: enp2s0f0np0: Gained carrier Jan 30 15:32:33.505655 systemd[1]: Reloading requested from client PID 1599 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 15:32:33.505661 systemd[1]: Reloading... Jan 30 15:32:33.525410 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 30 15:32:33.525460 zram_generator::config[1641]: No configuration found. Jan 30 15:32:33.544620 systemd-networkd[1556]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a2:80.network. Jan 30 15:32:33.544772 systemd-networkd[1556]: enp2s0f1np1: Link UP Jan 30 15:32:33.544921 systemd-networkd[1556]: enp2s0f1np1: Gained carrier Jan 30 15:32:33.551551 systemd-networkd[1556]: bond0: Link UP Jan 30 15:32:33.551717 systemd-networkd[1556]: bond0: Gained carrier Jan 30 15:32:33.598409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:32:33.629928 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 30 15:32:33.629952 kernel: bond0: active interface up! Jan 30 15:32:33.649115 systemd[1]: Reloading finished in 143 ms. 
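For context on the bonding sequence above: systemd-networkd assembles bond0 from the two mlx5 ports using the unit files named in the log (/etc/systemd/network/05-bond0.network plus the per-MAC 10-04:3f:72:d9:a2:80.network and 10-04:3f:72:d9:a2:81.network files). The contents of those files are not included in this log, so the following is only a plausible sketch inferred from the unit names and the "No 802.3ad response" warning, not the actual configuration shipped on this host; the .netdev file name is hypothetical.

    # 25-bond0.netdev (hypothetical; defines the bond device itself)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad          # inferred from the 802.3ad link-partner warning
    MIIMonitorSec=0.1

    # 10-04:3f:72:d9:a2:80.network - enslave the port with this MAC to bond0
    [Match]
    MACAddress=04:3f:72:d9:a2:80

    [Network]
    Bond=bond0

    # 05-bond0.network - bring up the bond
    [Match]
    Name=bond0

    [Network]
    DHCP=yes              # assumption; the initrd slaves held DHCPv6 leases earlier in this log

With units like these in place, networkd's "Enumeration completed", per-slave "Gained carrier", and final "bond0: Gained carrier" messages above follow the expected order: slaves match by MAC, get enslaved, and the bond comes up once at least one slave link is verified.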
Jan 30 15:32:33.661519 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 15:32:33.682521 systemd[1]: Starting ensure-sysext.service... Jan 30 15:32:33.690071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:32:33.707503 systemd[1]: Reloading requested from client PID 1702 ('systemctl') (unit ensure-sysext.service)... Jan 30 15:32:33.707511 systemd[1]: Reloading... Jan 30 15:32:33.716344 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 15:32:33.716569 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 15:32:33.717051 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 15:32:33.717214 systemd-tmpfiles[1703]: ACLs are not supported, ignoring. Jan 30 15:32:33.717266 systemd-tmpfiles[1703]: ACLs are not supported, ignoring. Jan 30 15:32:33.718804 systemd-tmpfiles[1703]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:32:33.718810 systemd-tmpfiles[1703]: Skipping /boot Jan 30 15:32:33.723710 systemd-tmpfiles[1703]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:32:33.723718 systemd-tmpfiles[1703]: Skipping /boot Jan 30 15:32:33.741416 zram_generator::config[1731]: No configuration found. Jan 30 15:32:33.763402 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 30 15:32:33.805423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:32:33.856419 systemd[1]: Reloading finished in 148 ms. Jan 30 15:32:33.869152 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:32:33.893687 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:32:33.905285 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 15:32:33.917224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 15:32:33.922881 augenrules[1814]: No rules Jan 30 15:32:33.929633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:32:33.940248 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 15:32:33.951763 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:32:33.961691 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 15:32:33.972604 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 15:32:34.005244 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 15:32:34.017367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:32:34.017541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:32:34.018221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:32:34.028016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 15:32:34.036726 systemd-resolved[1820]: Positive Trust Anchors: Jan 30 15:32:34.036733 systemd-resolved[1820]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:32:34.036757 systemd-resolved[1820]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:32:34.039081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:32:34.039412 systemd-resolved[1820]: Using system hostname 'ci-4081.3.0-a-8297fae690'. Jan 30 15:32:34.048503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:32:34.057001 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 15:32:34.066439 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 15:32:34.066501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:32:34.067045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:32:34.076772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:32:34.076864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:32:34.087650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:32:34.087732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:32:34.098649 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:32:34.098731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:32:34.108710 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 15:32:34.121993 systemd[1]: Reached target network.target - Network. Jan 30 15:32:34.131503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:32:34.142491 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:32:34.142626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:32:34.150509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:32:34.160964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:32:34.170991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:32:34.182998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:32:34.192594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 15:32:34.192672 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 15:32:34.192724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:32:34.193364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:32:34.193485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:32:34.204696 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:32:34.204783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:32:34.214664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:32:34.214740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:32:34.225634 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:32:34.225712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:32:34.236423 systemd[1]: Finished ensure-sysext.service. Jan 30 15:32:34.245956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:32:34.245988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:32:34.254555 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 15:32:34.302813 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 15:32:34.313531 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:32:34.323470 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 15:32:34.334439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 15:32:34.345434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 15:32:34.356525 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 15:32:34.356542 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:32:34.364463 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 15:32:34.374527 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 15:32:34.384480 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 15:32:34.395416 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:32:34.403640 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 15:32:34.414206 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 15:32:34.423127 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 15:32:34.432661 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 15:32:34.442447 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:32:34.452515 systemd[1]: Reached target basic.target - Basic System. 
Jan 30 15:32:34.460953 systemd[1]: System is tainted: cgroupsv1 Jan 30 15:32:34.461103 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:32:34.461216 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:32:34.479432 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 15:32:34.490179 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 15:32:34.500044 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 15:32:34.509064 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 15:32:34.512556 coreos-metadata[1866]: Jan 30 15:32:34.512 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 15:32:34.519325 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 15:32:34.519711 dbus-daemon[1867]: [system] SELinux support is enabled Jan 30 15:32:34.521129 jq[1870]: false Jan 30 15:32:34.528455 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 15:32:34.529179 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 15:32:34.537307 extend-filesystems[1872]: Found loop4 Jan 30 15:32:34.537307 extend-filesystems[1872]: Found loop5 Jan 30 15:32:34.584389 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jan 30 15:32:34.584407 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1527) Jan 30 15:32:34.584417 extend-filesystems[1872]: Found loop6 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found loop7 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda1 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda2 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda3 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found usr Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda4 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda6 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda7 Jan 30 15:32:34.584417 extend-filesystems[1872]: Found sda9 Jan 30 15:32:34.584417 extend-filesystems[1872]: Checking size of /dev/sda9 Jan 30 15:32:34.584417 extend-filesystems[1872]: Resized partition /dev/sda9 Jan 30 15:32:34.540212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 15:32:34.766504 extend-filesystems[1881]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:32:34.585070 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 15:32:34.599058 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 15:32:34.614717 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 15:32:34.642296 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 30 15:32:34.657103 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 30 15:32:34.669280 systemd-logind[1895]: Watching system buttons on /dev/input/event3 (Power Button) Jan 30 15:32:34.786581 update_engine[1901]: I20250130 15:32:34.687072 1901 main.cc:92] Flatcar Update Engine starting Jan 30 15:32:34.786581 update_engine[1901]: I20250130 15:32:34.687762 1901 update_check_scheduler.cc:74] Next update check in 5m43s Jan 30 15:32:34.669289 systemd-logind[1895]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 15:32:34.786771 jq[1902]: true Jan 30 15:32:34.669299 systemd-logind[1895]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 30 15:32:34.669418 systemd-logind[1895]: New seat seat0. Jan 30 15:32:34.672125 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 15:32:34.679707 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 15:32:34.710702 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 15:32:34.741494 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 15:32:34.741626 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 15:32:34.741763 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:32:34.741883 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:32:34.758891 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:32:34.759009 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:32:34.788651 jq[1907]: true Jan 30 15:32:34.789249 (ntainerd)[1908]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:32:34.792578 dbus-daemon[1867]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 15:32:34.797072 tar[1906]: linux-amd64/helm Jan 30 15:32:34.799068 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 30 15:32:34.799211 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 30 15:32:34.802893 systemd[1]: Started update-engine.service - Update Engine. Jan 30 15:32:34.814024 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 15:32:34.814127 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 15:32:34.825446 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 15:32:34.825527 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:32:34.836736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 15:32:34.840799 bash[1936]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:32:34.849546 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:32:34.861489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:32:34.873948 systemd[1]: Starting sshkeys.service... 
Jan 30 15:32:34.875396 sshd_keygen[1899]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:32:34.877794 locksmithd[1939]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:32:34.885881 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:32:34.898218 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 15:32:34.909771 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:32:34.920478 coreos-metadata[1955]: Jan 30 15:32:34.920 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 15:32:34.921057 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:32:34.934363 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:32:34.934494 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:32:34.945946 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 15:32:34.960275 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:32:34.983625 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:32:34.991682 containerd[1908]: time="2025-01-30T15:32:34.991634619Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 15:32:34.992328 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 30 15:32:35.002555 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:32:35.004740 containerd[1908]: time="2025-01-30T15:32:35.004723925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005480 containerd[1908]: time="2025-01-30T15:32:35.005463176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005501 containerd[1908]: time="2025-01-30T15:32:35.005481064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:32:35.005501 containerd[1908]: time="2025-01-30T15:32:35.005491162Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 15:32:35.005580 containerd[1908]: time="2025-01-30T15:32:35.005571939Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 15:32:35.005602 containerd[1908]: time="2025-01-30T15:32:35.005583039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005624 containerd[1908]: time="2025-01-30T15:32:35.005615929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005639 containerd[1908]: time="2025-01-30T15:32:35.005624040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005744 containerd[1908]: time="2025-01-30T15:32:35.005735328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005763 containerd[1908]: time="2025-01-30T15:32:35.005744866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005763 containerd[1908]: time="2025-01-30T15:32:35.005752337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005763 containerd[1908]: time="2025-01-30T15:32:35.005757683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005809 containerd[1908]: time="2025-01-30T15:32:35.005797621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005914 containerd[1908]: time="2025-01-30T15:32:35.005906610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:32:35.005987 containerd[1908]: time="2025-01-30T15:32:35.005978892Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:32:35.006007 containerd[1908]: time="2025-01-30T15:32:35.005987900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 15:32:35.006033 containerd[1908]: time="2025-01-30T15:32:35.006026730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 15:32:35.006058 containerd[1908]: time="2025-01-30T15:32:35.006052424Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:32:35.021291 containerd[1908]: time="2025-01-30T15:32:35.021261799Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:32:35.021317 containerd[1908]: time="2025-01-30T15:32:35.021289533Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:32:35.021317 containerd[1908]: time="2025-01-30T15:32:35.021302886Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 15:32:35.021317 containerd[1908]: time="2025-01-30T15:32:35.021311883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:32:35.021369 containerd[1908]: time="2025-01-30T15:32:35.021320413Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:32:35.021407 containerd[1908]: time="2025-01-30T15:32:35.021398862Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 15:32:35.021582 containerd[1908]: time="2025-01-30T15:32:35.021572247Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:32:35.021643 containerd[1908]: time="2025-01-30T15:32:35.021635135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 15:32:35.021660 containerd[1908]: time="2025-01-30T15:32:35.021645379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:32:35.021660 containerd[1908]: time="2025-01-30T15:32:35.021653333Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:32:35.021699 containerd[1908]: time="2025-01-30T15:32:35.021664958Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021699 containerd[1908]: time="2025-01-30T15:32:35.021675858Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021699 containerd[1908]: time="2025-01-30T15:32:35.021682967Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021699 containerd[1908]: time="2025-01-30T15:32:35.021690844Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021699304Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021706529Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021713360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021719364Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021730589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021738749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021753 containerd[1908]: time="2025-01-30T15:32:35.021746115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021753839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021760585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021767998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021774303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021780966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021787698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021795368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021801483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021807733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021816035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021824769Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021835695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021842577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.021857 containerd[1908]: time="2025-01-30T15:32:35.021848674Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021873249Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021883112Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021889271Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021896488Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021901931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021908474Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021914193Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:32:35.022070 containerd[1908]: time="2025-01-30T15:32:35.021919720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 15:32:35.022198 containerd[1908]: time="2025-01-30T15:32:35.022072528Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:32:35.022198 containerd[1908]: time="2025-01-30T15:32:35.022111304Z" level=info msg="Connect containerd service" Jan 30 15:32:35.022198 containerd[1908]: time="2025-01-30T15:32:35.022129969Z" level=info msg="using legacy CRI server" Jan 30 15:32:35.022198 containerd[1908]: time="2025-01-30T15:32:35.022134023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:32:35.022557 containerd[1908]: time="2025-01-30T15:32:35.022543240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:32:35.022984 containerd[1908]: time="2025-01-30T15:32:35.022968815Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
15:32:35.023083 containerd[1908]: time="2025-01-30T15:32:35.023059488Z" level=info msg="Start subscribing containerd event" Jan 30 15:32:35.023105 containerd[1908]: time="2025-01-30T15:32:35.023091952Z" level=info msg="Start recovering state" Jan 30 15:32:35.023171 containerd[1908]: time="2025-01-30T15:32:35.023163879Z" level=info msg="Start event monitor" Jan 30 15:32:35.023191 containerd[1908]: time="2025-01-30T15:32:35.023167683Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:32:35.023191 containerd[1908]: time="2025-01-30T15:32:35.023174846Z" level=info msg="Start snapshots syncer" Jan 30 15:32:35.023191 containerd[1908]: time="2025-01-30T15:32:35.023184353Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:32:35.023191 containerd[1908]: time="2025-01-30T15:32:35.023188479Z" level=info msg="Start streaming server" Jan 30 15:32:35.023261 containerd[1908]: time="2025-01-30T15:32:35.023196695Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:32:35.023261 containerd[1908]: time="2025-01-30T15:32:35.023231602Z" level=info msg="containerd successfully booted in 0.032119s" Jan 30 15:32:35.023291 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:32:35.080355 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jan 30 15:32:35.101642 extend-filesystems[1881]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 15:32:35.101642 extend-filesystems[1881]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 30 15:32:35.101642 extend-filesystems[1881]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jan 30 15:32:35.132434 extend-filesystems[1872]: Resized filesystem in /dev/sda9 Jan 30 15:32:35.132434 extend-filesystems[1872]: Found sdb Jan 30 15:32:35.132528 tar[1906]: linux-amd64/LICENSE Jan 30 15:32:35.132528 tar[1906]: linux-amd64/README.md Jan 30 15:32:35.102176 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 15:32:35.102313 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 15:32:35.166382 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:32:35.350505 systemd-networkd[1556]: bond0: Gained IPv6LL Jan 30 15:32:35.351735 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:32:35.363158 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:32:35.383552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:32:35.394172 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:32:35.412031 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:32:36.027328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
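The on-line resize above takes /dev/sda9 to 116,605,649 blocks of 4 KiB, i.e. 116605649 * 4096 = 477,616,738,304 bytes, about 478 GB (445 GiB). A minimal, Linux-only sketch in Go (standard library only) of confirming the new capacity of the filesystem mounted on /, which is where the log says sda9 sits:

```go
// fscap.go - report the capacity of the filesystem mounted at "/",
// e.g. to confirm an on-line resize took effect (Linux-only sketch).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	total := uint64(st.Blocks) * uint64(st.Bsize) // capacity in bytes
	free := uint64(st.Bavail) * uint64(st.Bsize)  // available to unprivileged users
	fmt.Printf("capacity: %.1f GiB, free: %.1f GiB\n",
		float64(total)/(1<<30), float64(free)/(1<<30))
}
```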
Jan 30 15:32:36.044696 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:32:36.208152 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Jan 30 15:32:36.208599 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Jan 30 15:32:36.572071 kubelet[2024]: E0130 15:32:36.572014 2024 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:32:36.573313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:32:36.573441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:32:38.263021 systemd-resolved[1820]: Clock change detected. Flushing caches. Jan 30 15:32:38.263229 systemd-timesyncd[1860]: Contacted time server 23.142.248.8:123 (0.flatcar.pool.ntp.org). Jan 30 15:32:38.263364 systemd-timesyncd[1860]: Initial clock synchronization to Thu 2025-01-30 15:32:38.262872 UTC. Jan 30 15:32:38.343740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:32:38.362842 systemd[1]: Started sshd@0-139.178.70.183:22-147.75.109.163:40660.service - OpenSSH per-connection server daemon (147.75.109.163:40660). Jan 30 15:32:38.402042 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 40660 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:38.403487 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:38.409328 systemd-logind[1895]: New session 1 of user core. Jan 30 15:32:38.410037 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:32:38.430887 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:32:38.444767 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:32:38.474300 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:32:38.499555 (systemd)[2051]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:32:38.614081 systemd[2051]: Queued start job for default target default.target. Jan 30 15:32:38.614250 systemd[2051]: Created slice app.slice - User Application Slice. Jan 30 15:32:38.614262 systemd[2051]: Reached target paths.target - Paths. Jan 30 15:32:38.614270 systemd[2051]: Reached target timers.target - Timers. Jan 30 15:32:38.631736 systemd[2051]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:32:38.634848 systemd[2051]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:32:38.634875 systemd[2051]: Reached target sockets.target - Sockets. Jan 30 15:32:38.634884 systemd[2051]: Reached target basic.target - Basic System. Jan 30 15:32:38.634904 systemd[2051]: Reached target default.target - Main User Target. Jan 30 15:32:38.634918 systemd[2051]: Startup finished in 119ms. Jan 30 15:32:38.635085 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:32:38.645634 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:32:38.723818 systemd[1]: Started sshd@1-139.178.70.183:22-147.75.109.163:48612.service - OpenSSH per-connection server daemon (147.75.109.163:48612). 
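The kubelet exit above is expected at this point in first boot: the unit starts before kubeadm (or anything else) has written /var/lib/kubelet/config.yaml, fails with status 1, and is left for systemd to retry (the restart counter shows up further down). A stdlib-only Go sketch of the same precondition check, with the path taken from the error message itself:

```go
// precheck.go - reproduce the kubelet's failing precondition: it cannot
// start until its config file exists at the expected path.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const cfg = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(cfg); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("%s missing: kubelet exits 1 until it is created\n", cfg)
		os.Exit(1)
	} else if err != nil {
		panic(err)
	}
	fmt.Println("kubelet config present")
}
```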
Jan 30 15:32:38.756808 sshd[2063]: Accepted publickey for core from 147.75.109.163 port 48612 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:38.757458 sshd[2063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:38.760052 systemd-logind[1895]: New session 2 of user core. Jan 30 15:32:38.777737 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:32:38.786890 coreos-metadata[1866]: Jan 30 15:32:38.786 INFO Fetch successful Jan 30 15:32:38.834951 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:32:38.835203 sshd[2063]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:38.846191 systemd[1]: sshd@1-139.178.70.183:22-147.75.109.163:48612.service: Deactivated successfully. Jan 30 15:32:38.847053 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:32:38.847819 systemd-logind[1895]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:32:38.849853 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 30 15:32:38.860636 systemd[1]: Started sshd@2-139.178.70.183:22-147.75.109.163:48628.service - OpenSSH per-connection server daemon (147.75.109.163:48628). Jan 30 15:32:38.872849 systemd-logind[1895]: Removed session 2. Jan 30 15:32:38.898848 sshd[2080]: Accepted publickey for core from 147.75.109.163 port 48628 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:38.899793 sshd[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:38.903406 systemd-logind[1895]: New session 3 of user core. Jan 30 15:32:38.911878 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:32:38.930092 coreos-metadata[1955]: Jan 30 15:32:38.930 INFO Fetch successful Jan 30 15:32:38.979111 sshd[2080]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:38.981680 systemd[1]: sshd@2-139.178.70.183:22-147.75.109.163:48628.service: Deactivated successfully. Jan 30 15:32:38.984270 systemd-logind[1895]: Session 3 logged out. Waiting for processes to exit. Jan 30 15:32:38.984527 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 15:32:38.985455 systemd-logind[1895]: Removed session 3. Jan 30 15:32:39.009830 unknown[1955]: wrote ssh authorized keys file for user: core Jan 30 15:32:39.045052 update-ssh-keys[2089]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:32:39.045325 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:32:39.057633 systemd[1]: Finished sshkeys.service. Jan 30 15:32:39.200769 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 30 15:32:39.212342 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:32:39.221946 systemd[1]: Startup finished in 23.882s (kernel) + 9.647s (userspace) = 33.530s. Jan 30 15:32:39.237886 login[1982]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:32:39.240741 systemd-logind[1895]: New session 4 of user core. Jan 30 15:32:39.241512 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:32:39.254968 login[1977]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:32:39.257914 systemd-logind[1895]: New session 5 of user core. Jan 30 15:32:39.258431 systemd[1]: Started session-5.scope - Session 5 of User core. 
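The "SHA256:cV+Vi..." string sshd logs for each accepted key is the SHA-256 digest of the wire-format key blob, base64-encoded without padding. A stdlib-only Go sketch of computing that fingerprint from an authorized_keys-style line; the key below is an illustrative placeholder, not the key from this journal:

```go
// fingerprint.go - compute an OpenSSH-style "SHA256:..." fingerprint
// from the base64 key blob of an authorized_keys entry.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

func fingerprint(line string) (string, error) {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed key line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1]) // wire-format blob
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	// OpenSSH prints the digest base64-encoded with '=' padding stripped.
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder key for illustration only.
	fp, err := fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGl0IGlzIG5vdCBhIHJlYWwga2V5ISEhISEh core@host")
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}
```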
Jan 30 15:32:47.446194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 15:32:47.463776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:32:47.665370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:32:47.668080 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:32:47.696050 kubelet[2136]: E0130 15:32:47.695996 2136 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:32:47.699216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:32:47.699356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:32:48.994913 systemd[1]: Started sshd@3-139.178.70.183:22-147.75.109.163:48218.service - OpenSSH per-connection server daemon (147.75.109.163:48218). Jan 30 15:32:49.022968 sshd[2158]: Accepted publickey for core from 147.75.109.163 port 48218 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:49.023612 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:49.026329 systemd-logind[1895]: New session 6 of user core. Jan 30 15:32:49.046047 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 15:32:49.098818 sshd[2158]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:49.107839 systemd[1]: Started sshd@4-139.178.70.183:22-147.75.109.163:48228.service - OpenSSH per-connection server daemon (147.75.109.163:48228). Jan 30 15:32:49.108143 systemd[1]: sshd@3-139.178.70.183:22-147.75.109.163:48218.service: Deactivated successfully. Jan 30 15:32:49.108914 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 15:32:49.109385 systemd-logind[1895]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:32:49.110280 systemd-logind[1895]: Removed session 6. Jan 30 15:32:49.134971 sshd[2164]: Accepted publickey for core from 147.75.109.163 port 48228 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:49.135641 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:49.138224 systemd-logind[1895]: New session 7 of user core. Jan 30 15:32:49.150810 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:32:49.201236 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:49.218829 systemd[1]: Started sshd@5-139.178.70.183:22-147.75.109.163:48236.service - OpenSSH per-connection server daemon (147.75.109.163:48236). Jan 30 15:32:49.219308 systemd[1]: sshd@4-139.178.70.183:22-147.75.109.163:48228.service: Deactivated successfully. Jan 30 15:32:49.220190 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:32:49.220739 systemd-logind[1895]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:32:49.221492 systemd-logind[1895]: Removed session 7. 
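The "Scheduled restart job, restart counter is at 1" entry above lands about 10.9 s after the first kubelet exit (15:32:47.446194 minus 15:32:36.573441), consistent with a roughly 10 s restart delay in the unit, though the actual RestartSec= value is not visible in this journal; the NTP step logged at 15:32:38 also means timestamps on either side of it are not strictly comparable. A small Go sketch of that timestamp arithmetic:

```go
// restartgap.go - measure the gap between two journal timestamps,
// here a unit failure and its scheduled restart.
package main

import (
	"fmt"
	"time"
)

// journal short-precise format, no year (differences still work).
const layout = "Jan 2 15:04:05.000000"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	failed := mustParse("Jan 30 15:32:36.573441")
	restart := mustParse("Jan 30 15:32:47.446194")
	fmt.Println("restart delay:", restart.Sub(failed)) // ~10.87s
}
```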
Jan 30 15:32:49.254210 sshd[2172]: Accepted publickey for core from 147.75.109.163 port 48236 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:49.255062 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:49.258407 systemd-logind[1895]: New session 8 of user core. Jan 30 15:32:49.266831 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 15:32:49.324142 sshd[2172]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:49.341287 systemd[1]: Started sshd@6-139.178.70.183:22-147.75.109.163:48252.service - OpenSSH per-connection server daemon (147.75.109.163:48252). Jan 30 15:32:49.343496 systemd[1]: sshd@5-139.178.70.183:22-147.75.109.163:48236.service: Deactivated successfully. Jan 30 15:32:49.347717 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 15:32:49.349800 systemd-logind[1895]: Session 8 logged out. Waiting for processes to exit. Jan 30 15:32:49.352820 systemd-logind[1895]: Removed session 8. Jan 30 15:32:49.408402 sshd[2180]: Accepted publickey for core from 147.75.109.163 port 48252 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:49.409192 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:49.412221 systemd-logind[1895]: New session 9 of user core. Jan 30 15:32:49.425832 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 15:32:49.488700 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 15:32:49.488852 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:32:49.498182 sudo[2186]: pam_unix(sudo:session): session closed for user root Jan 30 15:32:49.499253 sshd[2180]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:49.510855 systemd[1]: Started sshd@7-139.178.70.183:22-147.75.109.163:48268.service - OpenSSH per-connection server daemon (147.75.109.163:48268). Jan 30 15:32:49.511428 systemd[1]: sshd@6-139.178.70.183:22-147.75.109.163:48252.service: Deactivated successfully. Jan 30 15:32:49.512409 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 15:32:49.512959 systemd-logind[1895]: Session 9 logged out. Waiting for processes to exit. Jan 30 15:32:49.513834 systemd-logind[1895]: Removed session 9. Jan 30 15:32:49.546765 sshd[2189]: Accepted publickey for core from 147.75.109.163 port 48268 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:49.547674 sshd[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:49.550807 systemd-logind[1895]: New session 10 of user core. Jan 30 15:32:49.567880 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 15:32:49.621406 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 15:32:49.621562 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:32:49.623685 sudo[2196]: pam_unix(sudo:session): session closed for user root Jan 30 15:32:49.626295 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 15:32:49.626445 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:32:49.641840 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 15:32:49.643068 auditctl[2199]: No rules Jan 30 15:32:49.643299 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:32:49.643467 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 15:32:49.645188 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:32:49.661458 augenrules[2218]: No rules Jan 30 15:32:49.661881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:32:49.662524 sudo[2195]: pam_unix(sudo:session): session closed for user root Jan 30 15:32:49.663613 sshd[2189]: pam_unix(sshd:session): session closed for user core Jan 30 15:32:49.680866 systemd[1]: Started sshd@8-139.178.70.183:22-147.75.109.163:48284.service - OpenSSH per-connection server daemon (147.75.109.163:48284). Jan 30 15:32:49.681570 systemd[1]: sshd@7-139.178.70.183:22-147.75.109.163:48268.service: Deactivated successfully. Jan 30 15:32:49.682822 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 15:32:49.683467 systemd-logind[1895]: Session 10 logged out. Waiting for processes to exit. Jan 30 15:32:49.684468 systemd-logind[1895]: Removed session 10. Jan 30 15:32:49.729595 sshd[2225]: Accepted publickey for core from 147.75.109.163 port 48284 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:32:49.730972 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:32:49.735893 systemd-logind[1895]: New session 11 of user core. Jan 30 15:32:49.749928 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 15:32:49.804268 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:32:49.804418 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:32:50.084849 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 15:32:50.084939 (dockerd)[2256]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 15:32:50.333311 dockerd[2256]: time="2025-01-30T15:32:50.333252232Z" level=info msg="Starting up" Jan 30 15:32:50.571506 dockerd[2256]: time="2025-01-30T15:32:50.571450370Z" level=info msg="Loading containers: start." Jan 30 15:32:50.666609 kernel: Initializing XFRM netlink socket Jan 30 15:32:50.731367 systemd-networkd[1556]: docker0: Link UP Jan 30 15:32:50.743457 dockerd[2256]: time="2025-01-30T15:32:50.743413398Z" level=info msg="Loading containers: done." Jan 30 15:32:50.752456 dockerd[2256]: time="2025-01-30T15:32:50.752402977Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 15:32:50.752456 dockerd[2256]: time="2025-01-30T15:32:50.752457387Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 15:32:50.752556 dockerd[2256]: time="2025-01-30T15:32:50.752507687Z" level=info msg="Daemon has completed initialization" Jan 30 15:32:50.766440 dockerd[2256]: time="2025-01-30T15:32:50.766389846Z" level=info msg="API listen on /run/docker.sock" Jan 30 15:32:50.766500 systemd[1]: Started docker.service - Docker Application Container Engine. 
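Once "API listen on /run/docker.sock" is logged, the daemon answers the Docker Engine API on that unix socket, and GET /_ping (which returns the literal body "OK") is the cheapest liveness probe. A stdlib-only Go sketch that dials the socket directly, standing in for what the docker CLI or a client library would do:

```go
// dockerping.go - liveness-check the Docker daemon over its unix socket
// using only the standard library (GET /_ping returns "OK").
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the URL's host and dial the daemon's socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body) // e.g. "200 OK OK"
}
```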
Jan 30 15:32:51.561825 containerd[1908]: time="2025-01-30T15:32:51.561777514Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 15:32:52.080572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255448642.mount: Deactivated successfully. Jan 30 15:32:52.964515 containerd[1908]: time="2025-01-30T15:32:52.964485168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:52.964818 containerd[1908]: time="2025-01-30T15:32:52.964660285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 15:32:52.965253 containerd[1908]: time="2025-01-30T15:32:52.965235440Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:52.967274 containerd[1908]: time="2025-01-30T15:32:52.967258815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:52.967836 containerd[1908]: time="2025-01-30T15:32:52.967821768Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.406022097s" Jan 30 15:32:52.967874 containerd[1908]: time="2025-01-30T15:32:52.967839294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 15:32:52.979023 containerd[1908]: time="2025-01-30T15:32:52.978968078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 15:32:54.189248 containerd[1908]: time="2025-01-30T15:32:54.189197392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:54.189470 containerd[1908]: time="2025-01-30T15:32:54.189375334Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 15:32:54.189970 containerd[1908]: time="2025-01-30T15:32:54.189919565Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:54.191518 containerd[1908]: time="2025-01-30T15:32:54.191477056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:54.192146 containerd[1908]: time="2025-01-30T15:32:54.192096730Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.213076041s" Jan 30 
15:32:54.192146 containerd[1908]: time="2025-01-30T15:32:54.192111958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 15:32:54.203683 containerd[1908]: time="2025-01-30T15:32:54.203621312Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 15:32:55.056726 containerd[1908]: time="2025-01-30T15:32:55.056672122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:55.056915 containerd[1908]: time="2025-01-30T15:32:55.056864140Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 15:32:55.057274 containerd[1908]: time="2025-01-30T15:32:55.057235496Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:55.058830 containerd[1908]: time="2025-01-30T15:32:55.058780569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:55.059543 containerd[1908]: time="2025-01-30T15:32:55.059494304Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 855.85115ms" Jan 30 15:32:55.059543 containerd[1908]: time="2025-01-30T15:32:55.059511739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 15:32:55.070388 containerd[1908]: time="2025-01-30T15:32:55.070368615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 15:32:55.805723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208201503.mount: Deactivated successfully. 
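Each "Pulled image ... in <duration>" entry pairs a "bytes read" counter with a wall-clock duration, so effective registry throughput falls out directly: for kube-scheduler:v1.30.9 above, 17,783,064 bytes in 855.85 ms is roughly 20.8 MB/s. A trivial Go sketch of the computation:

```go
// pullrate.go - derive effective pull throughput from the byte count
// and duration containerd logs for an image pull.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 17783064.0 // "bytes read" for kube-scheduler:v1.30.9
	dur, err := time.ParseDuration("855.85115ms")
	if err != nil {
		panic(err)
	}
	fmt.Printf("throughput: %.1f MB/s\n", bytesRead/dur.Seconds()/1e6) // ~20.8
}
```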
Jan 30 15:32:55.982205 containerd[1908]: time="2025-01-30T15:32:55.982178920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:55.982427 containerd[1908]: time="2025-01-30T15:32:55.982320686Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 15:32:55.982769 containerd[1908]: time="2025-01-30T15:32:55.982733749Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:55.983689 containerd[1908]: time="2025-01-30T15:32:55.983646866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:55.984064 containerd[1908]: time="2025-01-30T15:32:55.984024872Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 913.635203ms" Jan 30 15:32:55.984064 containerd[1908]: time="2025-01-30T15:32:55.984040215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 15:32:55.994782 containerd[1908]: time="2025-01-30T15:32:55.994747055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 15:32:56.486517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908626079.mount: Deactivated successfully. 
Jan 30 15:32:56.995475 containerd[1908]: time="2025-01-30T15:32:56.995450813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:56.995718 containerd[1908]: time="2025-01-30T15:32:56.995698213Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 15:32:56.996043 containerd[1908]: time="2025-01-30T15:32:56.996032963Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:56.998119 containerd[1908]: time="2025-01-30T15:32:56.998075472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:56.998596 containerd[1908]: time="2025-01-30T15:32:56.998573217Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.003805021s" Jan 30 15:32:56.998596 containerd[1908]: time="2025-01-30T15:32:56.998589094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 15:32:57.009699 containerd[1908]: time="2025-01-30T15:32:57.009651664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 15:32:57.495969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990139109.mount: Deactivated successfully. 
Jan 30 15:32:57.498131 containerd[1908]: time="2025-01-30T15:32:57.498112035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:57.498304 containerd[1908]: time="2025-01-30T15:32:57.498288187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 15:32:57.498703 containerd[1908]: time="2025-01-30T15:32:57.498691599Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:57.500211 containerd[1908]: time="2025-01-30T15:32:57.500198944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:57.500633 containerd[1908]: time="2025-01-30T15:32:57.500621707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 490.95012ms" Jan 30 15:32:57.500663 containerd[1908]: time="2025-01-30T15:32:57.500635952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 15:32:57.512338 containerd[1908]: time="2025-01-30T15:32:57.512302145Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 15:32:57.944885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:32:57.956690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:32:58.169533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:32:58.170823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142021909.mount: Deactivated successfully. Jan 30 15:32:58.172091 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:32:58.200159 kubelet[2636]: E0130 15:32:58.199810 2636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:32:58.201784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:32:58.201965 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:32:59.300520 containerd[1908]: time="2025-01-30T15:32:59.300466246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:59.300752 containerd[1908]: time="2025-01-30T15:32:59.300660891Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 15:32:59.301113 containerd[1908]: time="2025-01-30T15:32:59.301073465Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:59.303073 containerd[1908]: time="2025-01-30T15:32:59.303026007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:32:59.303565 containerd[1908]: time="2025-01-30T15:32:59.303519633Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.791180418s" Jan 30 15:32:59.303565 containerd[1908]: time="2025-01-30T15:32:59.303541779Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 15:33:00.731489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:33:00.740870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:33:00.754261 systemd[1]: Reloading requested from client PID 2854 ('systemctl') (unit session-11.scope)... Jan 30 15:33:00.754269 systemd[1]: Reloading... Jan 30 15:33:00.791591 zram_generator::config[2893]: No configuration found. Jan 30 15:33:00.863154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:33:00.919738 systemd[1]: Reloading finished in 165 ms. Jan 30 15:33:00.957762 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 15:33:00.957800 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 15:33:00.957934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:33:00.959183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:33:01.159521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:33:01.161859 (kubelet)[2973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:33:01.186081 kubelet[2973]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:33:01.186081 kubelet[2973]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
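The docker.socket rewrite above (/var/run/docker.sock to /run/docker.sock) happens because /var/run is a symlink to /run on this system, so systemd normalizes the legacy ListenStream= path. A one-call Go sketch showing the resolution, assuming the usual Linux /var/run -> /run layout:

```go
// resolvesock.go - show why systemd rewrites /var/run/docker.sock:
// /var/run is a symlink to /run on modern Linux systems.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	resolved, err := filepath.EvalSymlinks("/var/run")
	if err != nil {
		panic(err)
	}
	fmt.Printf("/var/run -> %s\n", resolved) // typically "/run"
}
```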
Jan 30 15:33:01.186081 kubelet[2973]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:33:01.187101 kubelet[2973]: I0130 15:33:01.187055 2973 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:33:01.357020 kubelet[2973]: I0130 15:33:01.357007 2973 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:33:01.357020 kubelet[2973]: I0130 15:33:01.357020 2973 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:33:01.357151 kubelet[2973]: I0130 15:33:01.357117 2973 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:33:01.370359 kubelet[2973]: I0130 15:33:01.370347 2973 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:33:01.371225 kubelet[2973]: E0130 15:33:01.371218 2973 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.386913 kubelet[2973]: I0130 15:33:01.386875 2973 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:33:01.387121 kubelet[2973]: I0130 15:33:01.387079 2973 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:33:01.387221 kubelet[2973]: I0130 15:33:01.387092 2973 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-8297fae690","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:33:01.387221 kubelet[2973]: I0130 15:33:01.387202 2973 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 15:33:01.387221 kubelet[2973]: I0130 15:33:01.387207 2973 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:33:01.387311 kubelet[2973]: I0130 15:33:01.387264 2973 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:33:01.388000 kubelet[2973]: I0130 15:33:01.387960 2973 kubelet.go:400] "Attempting to sync node with API server" Jan 30 15:33:01.388000 kubelet[2973]: I0130 15:33:01.387968 2973 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:33:01.388000 kubelet[2973]: I0130 15:33:01.387979 2973 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:33:01.388000 kubelet[2973]: I0130 15:33:01.387986 2973 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:33:01.390729 kubelet[2973]: W0130 15:33:01.390668 2973 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.390729 kubelet[2973]: E0130 15:33:01.390712 2973 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.390776 kubelet[2973]: W0130 15:33:01.390736 2973 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8297fae690&limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.390776 kubelet[2973]: E0130 15:33:01.390761 2973 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8297fae690&limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.391417 kubelet[2973]: I0130 15:33:01.391408 2973 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:33:01.395799 kubelet[2973]: I0130 15:33:01.395789 2973 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:33:01.395831 kubelet[2973]: W0130 15:33:01.395827 2973 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 15:33:01.396167 kubelet[2973]: I0130 15:33:01.396159 2973 server.go:1264] "Started kubelet" Jan 30 15:33:01.396227 kubelet[2973]: I0130 15:33:01.396190 2973 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:33:01.396274 kubelet[2973]: I0130 15:33:01.396202 2973 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:33:01.396455 kubelet[2973]: I0130 15:33:01.396447 2973 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:33:01.397019 kubelet[2973]: I0130 15:33:01.396986 2973 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:33:01.397223 kubelet[2973]: I0130 15:33:01.397038 2973 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:33:01.397223 kubelet[2973]: I0130 15:33:01.397222 2973 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:33:01.397566 kubelet[2973]: I0130 15:33:01.397555 2973 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:33:01.397621 kubelet[2973]: I0130 15:33:01.397605 2973 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:33:01.409502 kubelet[2973]: E0130 15:33:01.409415 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:01.409502 kubelet[2973]: I0130 15:33:01.409442 2973 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:33:01.409636 kubelet[2973]: I0130 15:33:01.409514 2973 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:33:01.410101 kubelet[2973]: E0130 15:33:01.410036 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-8297fae690?timeout=10s\": dial tcp 139.178.70.183:6443: connect: connection refused" interval="200ms" Jan 30 15:33:01.410101 kubelet[2973]: W0130 15:33:01.410066 2973 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.410191 kubelet[2973]: E0130 15:33:01.410121 2973 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.413321 kubelet[2973]: E0130 15:33:01.413224 2973 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.183:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-8297fae690.181f823d65850c76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-8297fae690,UID:ci-4081.3.0-a-8297fae690,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-8297fae690,},FirstTimestamp:2025-01-30 15:33:01.39613503 +0000 UTC m=+0.232512494,LastTimestamp:2025-01-30 
15:33:01.39613503 +0000 UTC m=+0.232512494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-8297fae690,}" Jan 30 15:33:01.413424 kubelet[2973]: E0130 15:33:01.413415 2973 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:33:01.413459 kubelet[2973]: I0130 15:33:01.413450 2973 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:33:01.417954 kubelet[2973]: I0130 15:33:01.417900 2973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:33:01.418567 kubelet[2973]: I0130 15:33:01.418545 2973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:33:01.418567 kubelet[2973]: I0130 15:33:01.418561 2973 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:33:01.418619 kubelet[2973]: I0130 15:33:01.418573 2973 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:33:01.418619 kubelet[2973]: E0130 15:33:01.418600 2973 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:33:01.418865 kubelet[2973]: W0130 15:33:01.418841 2973 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.418897 kubelet[2973]: E0130 15:33:01.418873 2973 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:01.427668 kubelet[2973]: I0130 15:33:01.427658 2973 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:33:01.427668 kubelet[2973]: I0130 15:33:01.427665 2973 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:33:01.427739 kubelet[2973]: I0130 15:33:01.427676 2973 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:33:01.428708 kubelet[2973]: I0130 15:33:01.428698 2973 policy_none.go:49] "None policy: Start" Jan 30 15:33:01.429033 kubelet[2973]: I0130 15:33:01.428992 2973 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:33:01.429033 kubelet[2973]: I0130 15:33:01.429003 2973 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:33:01.431432 kubelet[2973]: I0130 15:33:01.431422 2973 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:33:01.431530 kubelet[2973]: I0130 15:33:01.431514 2973 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:33:01.431579 kubelet[2973]: I0130 15:33:01.431573 2973 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:33:01.431956 kubelet[2973]: E0130 15:33:01.431947 2973 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:01.512550 kubelet[2973]: I0130 15:33:01.512522 2973 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.512819 kubelet[2973]: E0130 15:33:01.512800 2973 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.183:6443/api/v1/nodes\": dial tcp 139.178.70.183:6443: connect: connection refused" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.519164 kubelet[2973]: I0130 15:33:01.519103 2973 topology_manager.go:215] "Topology Admit Handler" podUID="a963f9f4037a83428d0cbe70f5fa5d3b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.522609 kubelet[2973]: I0130 15:33:01.522535 2973 topology_manager.go:215] "Topology Admit Handler" podUID="3b89ea6802b4d2c8440b2fba93cef9ab" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.526376 kubelet[2973]: I0130 15:33:01.526277 2973 topology_manager.go:215] "Topology Admit Handler" podUID="a6e6bcbc92ce45f32e908845b8db726e" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.610941 kubelet[2973]: E0130 15:33:01.610847 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-8297fae690?timeout=10s\": dial tcp 139.178.70.183:6443: connect: connection refused" interval="400ms" Jan 30 15:33:01.610941 kubelet[2973]: I0130 15:33:01.610913 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a963f9f4037a83428d0cbe70f5fa5d3b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8297fae690\" (UID: \"a963f9f4037a83428d0cbe70f5fa5d3b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611314 kubelet[2973]: I0130 15:33:01.611004 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a963f9f4037a83428d0cbe70f5fa5d3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-8297fae690\" (UID: \"a963f9f4037a83428d0cbe70f5fa5d3b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611314 kubelet[2973]: I0130 15:33:01.611111 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611314 kubelet[2973]: I0130 15:33:01.611181 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611314 kubelet[2973]: I0130 15:33:01.611242 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a963f9f4037a83428d0cbe70f5fa5d3b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8297fae690\" (UID: \"a963f9f4037a83428d0cbe70f5fa5d3b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611314 
kubelet[2973]: I0130 15:33:01.611293 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611777 kubelet[2973]: I0130 15:33:01.611340 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611777 kubelet[2973]: I0130 15:33:01.611394 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.611777 kubelet[2973]: I0130 15:33:01.611506 2973 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6e6bcbc92ce45f32e908845b8db726e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-8297fae690\" (UID: \"a6e6bcbc92ce45f32e908845b8db726e\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.716797 kubelet[2973]: I0130 15:33:01.716707 2973 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.717462 kubelet[2973]: E0130 15:33:01.717362 2973 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.183:6443/api/v1/nodes\": dial tcp 139.178.70.183:6443: connect: connection refused" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:01.834100 containerd[1908]: time="2025-01-30T15:33:01.833965496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-8297fae690,Uid:a963f9f4037a83428d0cbe70f5fa5d3b,Namespace:kube-system,Attempt:0,}" Jan 30 15:33:01.836237 containerd[1908]: time="2025-01-30T15:33:01.836223231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-8297fae690,Uid:3b89ea6802b4d2c8440b2fba93cef9ab,Namespace:kube-system,Attempt:0,}" Jan 30 15:33:01.838890 containerd[1908]: time="2025-01-30T15:33:01.838862347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-8297fae690,Uid:a6e6bcbc92ce45f32e908845b8db726e,Namespace:kube-system,Attempt:0,}" Jan 30 15:33:02.011607 kubelet[2973]: E0130 15:33:02.011534 2973 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-8297fae690?timeout=10s\": dial tcp 139.178.70.183:6443: connect: connection refused" interval="800ms" Jan 30 15:33:02.119877 kubelet[2973]: I0130 15:33:02.119833 2973 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:02.120064 kubelet[2973]: E0130 15:33:02.120031 2973 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://139.178.70.183:6443/api/v1/nodes\": dial tcp 139.178.70.183:6443: connect: connection refused" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:02.324605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831476725.mount: Deactivated successfully. Jan 30 15:33:02.326298 containerd[1908]: time="2025-01-30T15:33:02.326251229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:33:02.326442 containerd[1908]: time="2025-01-30T15:33:02.326396227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 15:33:02.327332 containerd[1908]: time="2025-01-30T15:33:02.327288596Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:33:02.327905 containerd[1908]: time="2025-01-30T15:33:02.327858727Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:33:02.328173 containerd[1908]: time="2025-01-30T15:33:02.328120111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:33:02.328493 containerd[1908]: time="2025-01-30T15:33:02.328444301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:33:02.328700 containerd[1908]: time="2025-01-30T15:33:02.328659578Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:33:02.329722 containerd[1908]: time="2025-01-30T15:33:02.329679478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:33:02.331118 containerd[1908]: time="2025-01-30T15:33:02.331072793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.169269ms" Jan 30 15:33:02.331931 containerd[1908]: time="2025-01-30T15:33:02.331879990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.627766ms" Jan 30 15:33:02.332980 containerd[1908]: time="2025-01-30T15:33:02.332936349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.799559ms" Jan 30 15:33:02.434977 containerd[1908]: 
time="2025-01-30T15:33:02.434889508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:02.434977 containerd[1908]: time="2025-01-30T15:33:02.434950626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:02.435299 containerd[1908]: time="2025-01-30T15:33:02.435279172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:02.435356 containerd[1908]: time="2025-01-30T15:33:02.435341706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:02.435489 containerd[1908]: time="2025-01-30T15:33:02.435460649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:02.435523 containerd[1908]: time="2025-01-30T15:33:02.435484655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:02.435550 containerd[1908]: time="2025-01-30T15:33:02.435489097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:02.435550 containerd[1908]: time="2025-01-30T15:33:02.435516526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:02.435594 containerd[1908]: time="2025-01-30T15:33:02.435524022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:02.435782 containerd[1908]: time="2025-01-30T15:33:02.435768142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:02.435804 containerd[1908]: time="2025-01-30T15:33:02.435789852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:02.435835 containerd[1908]: time="2025-01-30T15:33:02.435821980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:02.447627 kubelet[2973]: W0130 15:33:02.447547 2973 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:02.447627 kubelet[2973]: E0130 15:33:02.447628 2973 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:02.478335 containerd[1908]: time="2025-01-30T15:33:02.478310205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-8297fae690,Uid:a963f9f4037a83428d0cbe70f5fa5d3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c6c1c5564652e29b687341548e6c8a8a17157b047d0042527c668c593469c14\"" Jan 30 15:33:02.478422 containerd[1908]: time="2025-01-30T15:33:02.478325202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-8297fae690,Uid:3b89ea6802b4d2c8440b2fba93cef9ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"edec73cf9a034bf83fc790dbc21892c927b6fdd365c54664af7e7104f46a774b\"" Jan 30 15:33:02.478422 containerd[1908]: time="2025-01-30T15:33:02.478374795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-8297fae690,Uid:a6e6bcbc92ce45f32e908845b8db726e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4b548019caf0595456b567b34a51839faf1eca280a50b5c633c1bac405fdfa7\"" Jan 30 15:33:02.480121 containerd[1908]: time="2025-01-30T15:33:02.480106672Z" level=info msg="CreateContainer within sandbox \"b4b548019caf0595456b567b34a51839faf1eca280a50b5c633c1bac405fdfa7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 15:33:02.480323 containerd[1908]: time="2025-01-30T15:33:02.480272031Z" level=info msg="CreateContainer within sandbox \"2c6c1c5564652e29b687341548e6c8a8a17157b047d0042527c668c593469c14\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 15:33:02.480428 containerd[1908]: time="2025-01-30T15:33:02.480410959Z" level=info msg="CreateContainer within sandbox \"edec73cf9a034bf83fc790dbc21892c927b6fdd365c54664af7e7104f46a774b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 15:33:02.486955 containerd[1908]: time="2025-01-30T15:33:02.486913488Z" level=info msg="CreateContainer within sandbox \"b4b548019caf0595456b567b34a51839faf1eca280a50b5c633c1bac405fdfa7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"15957e5ad52b2a4fd07de22877151a90264a72aa09bb612e751ce5a9da010633\"" Jan 30 15:33:02.487192 containerd[1908]: time="2025-01-30T15:33:02.487178865Z" level=info msg="StartContainer for \"15957e5ad52b2a4fd07de22877151a90264a72aa09bb612e751ce5a9da010633\"" Jan 30 15:33:02.488517 containerd[1908]: time="2025-01-30T15:33:02.488476197Z" level=info msg="CreateContainer within sandbox \"edec73cf9a034bf83fc790dbc21892c927b6fdd365c54664af7e7104f46a774b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a8a3d8d4f83c018bb062b48dfd38dd8009fbb416c5b8c5b4d5de25462dce41d8\"" Jan 30 15:33:02.488713 containerd[1908]: time="2025-01-30T15:33:02.488671581Z" level=info msg="CreateContainer within sandbox \"2c6c1c5564652e29b687341548e6c8a8a17157b047d0042527c668c593469c14\" 
for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a0f22afb210bcdbde5af44aaa17efaa89c1638283ffd151292b6e3bd0c7c547\"" Jan 30 15:33:02.488743 containerd[1908]: time="2025-01-30T15:33:02.488734100Z" level=info msg="StartContainer for \"a8a3d8d4f83c018bb062b48dfd38dd8009fbb416c5b8c5b4d5de25462dce41d8\"" Jan 30 15:33:02.488939 containerd[1908]: time="2025-01-30T15:33:02.488904096Z" level=info msg="StartContainer for \"7a0f22afb210bcdbde5af44aaa17efaa89c1638283ffd151292b6e3bd0c7c547\"" Jan 30 15:33:02.559342 kubelet[2973]: W0130 15:33:02.559266 2973 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8297fae690&limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:02.559342 kubelet[2973]: E0130 15:33:02.559323 2973 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-8297fae690&limit=500&resourceVersion=0": dial tcp 139.178.70.183:6443: connect: connection refused Jan 30 15:33:02.575756 containerd[1908]: time="2025-01-30T15:33:02.575666451Z" level=info msg="StartContainer for \"a8a3d8d4f83c018bb062b48dfd38dd8009fbb416c5b8c5b4d5de25462dce41d8\" returns successfully" Jan 30 15:33:02.575756 containerd[1908]: time="2025-01-30T15:33:02.575733056Z" level=info msg="StartContainer for \"7a0f22afb210bcdbde5af44aaa17efaa89c1638283ffd151292b6e3bd0c7c547\" returns successfully" Jan 30 15:33:02.575882 containerd[1908]: time="2025-01-30T15:33:02.575666459Z" level=info msg="StartContainer for \"15957e5ad52b2a4fd07de22877151a90264a72aa09bb612e751ce5a9da010633\" returns successfully" Jan 30 15:33:02.921621 kubelet[2973]: I0130 15:33:02.921531 2973 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:03.275503 kubelet[2973]: E0130 15:33:03.275482 2973 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-8297fae690\" not found" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:03.379074 kubelet[2973]: I0130 15:33:03.379031 2973 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:03.383635 kubelet[2973]: E0130 15:33:03.383625 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:03.484210 kubelet[2973]: E0130 15:33:03.484189 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:03.585107 kubelet[2973]: E0130 15:33:03.585025 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:03.685800 kubelet[2973]: E0130 15:33:03.685689 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:03.786923 kubelet[2973]: E0130 15:33:03.786796 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:03.887522 kubelet[2973]: E0130 15:33:03.887336 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:03.988303 kubelet[2973]: E0130 
15:33:03.988201 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:04.089566 kubelet[2973]: E0130 15:33:04.089432 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:04.190637 kubelet[2973]: E0130 15:33:04.190379 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:04.291419 kubelet[2973]: E0130 15:33:04.291308 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:04.392039 kubelet[2973]: E0130 15:33:04.391970 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:04.493146 kubelet[2973]: E0130 15:33:04.493044 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:04.594876 kubelet[2973]: E0130 15:33:04.594851 2973 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:05.390049 kubelet[2973]: I0130 15:33:05.389956 2973 apiserver.go:52] "Watching apiserver" Jan 30 15:33:05.410682 kubelet[2973]: I0130 15:33:05.410628 2973 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:33:05.629096 systemd[1]: Reloading requested from client PID 3285 ('systemctl') (unit session-11.scope)... Jan 30 15:33:05.629128 systemd[1]: Reloading... Jan 30 15:33:05.691591 zram_generator::config[3324]: No configuration found. Jan 30 15:33:05.763289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:33:05.823948 systemd[1]: Reloading finished in 193 ms. Jan 30 15:33:05.847781 kubelet[2973]: I0130 15:33:05.847760 2973 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:33:05.847842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:33:05.858070 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:33:05.858231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:33:05.873845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:33:06.068526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:33:06.071048 (kubelet)[3399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:33:06.092294 kubelet[3399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:33:06.092294 kubelet[3399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:33:06.092294 kubelet[3399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:33:06.092548 kubelet[3399]: I0130 15:33:06.092293 3399 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:33:06.095323 kubelet[3399]: I0130 15:33:06.095283 3399 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:33:06.095323 kubelet[3399]: I0130 15:33:06.095293 3399 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:33:06.095396 kubelet[3399]: I0130 15:33:06.095391 3399 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:33:06.096530 kubelet[3399]: I0130 15:33:06.096517 3399 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 15:33:06.097103 kubelet[3399]: I0130 15:33:06.097090 3399 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:33:06.105132 kubelet[3399]: I0130 15:33:06.105122 3399 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:33:06.105361 kubelet[3399]: I0130 15:33:06.105345 3399 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:33:06.105472 kubelet[3399]: I0130 15:33:06.105362 3399 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-8297fae690","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:33:06.105557 kubelet[3399]: I0130 15:33:06.105484 3399 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:33:06.105557 kubelet[3399]: I0130 15:33:06.105494 3399 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:33:06.105557 kubelet[3399]: I0130 15:33:06.105523 3399 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:33:06.105646 kubelet[3399]: I0130 15:33:06.105590 3399 kubelet.go:400] "Attempting to sync node with API 
server" Jan 30 15:33:06.105646 kubelet[3399]: I0130 15:33:06.105603 3399 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:33:06.105646 kubelet[3399]: I0130 15:33:06.105620 3399 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:33:06.105646 kubelet[3399]: I0130 15:33:06.105634 3399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:33:06.105890 kubelet[3399]: I0130 15:33:06.105875 3399 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:33:06.105983 kubelet[3399]: I0130 15:33:06.105975 3399 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:33:06.106205 kubelet[3399]: I0130 15:33:06.106197 3399 server.go:1264] "Started kubelet" Jan 30 15:33:06.106249 kubelet[3399]: I0130 15:33:06.106234 3399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:33:06.106308 kubelet[3399]: I0130 15:33:06.106276 3399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:33:06.106952 kubelet[3399]: I0130 15:33:06.106833 3399 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:33:06.107717 kubelet[3399]: I0130 15:33:06.107709 3399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:33:06.107774 kubelet[3399]: I0130 15:33:06.107762 3399 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:33:06.107965 kubelet[3399]: I0130 15:33:06.107770 3399 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:33:06.107965 kubelet[3399]: E0130 15:33:06.107771 3399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-8297fae690\" not found" Jan 30 15:33:06.107965 kubelet[3399]: I0130 15:33:06.107782 3399 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:33:06.108072 kubelet[3399]: I0130 15:33:06.108008 3399 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:33:06.108072 kubelet[3399]: E0130 15:33:06.108055 3399 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:33:06.108421 kubelet[3399]: I0130 15:33:06.108324 3399 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:33:06.108421 kubelet[3399]: I0130 15:33:06.108388 3399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:33:06.108915 kubelet[3399]: I0130 15:33:06.108905 3399 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:33:06.112615 kubelet[3399]: I0130 15:33:06.112590 3399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:33:06.113137 kubelet[3399]: I0130 15:33:06.113129 3399 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 15:33:06.113157 kubelet[3399]: I0130 15:33:06.113151 3399 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:33:06.113183 kubelet[3399]: I0130 15:33:06.113165 3399 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:33:06.113204 kubelet[3399]: E0130 15:33:06.113192 3399 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:33:06.128733 kubelet[3399]: I0130 15:33:06.128720 3399 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:33:06.128733 kubelet[3399]: I0130 15:33:06.128729 3399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:33:06.128814 kubelet[3399]: I0130 15:33:06.128740 3399 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:33:06.128830 kubelet[3399]: I0130 15:33:06.128825 3399 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:33:06.128846 kubelet[3399]: I0130 15:33:06.128831 3399 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:33:06.128846 kubelet[3399]: I0130 15:33:06.128842 3399 policy_none.go:49] "None policy: Start" Jan 30 15:33:06.129078 kubelet[3399]: I0130 15:33:06.129072 3399 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:33:06.129101 kubelet[3399]: I0130 15:33:06.129081 3399 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:33:06.129166 kubelet[3399]: I0130 15:33:06.129161 3399 state_mem.go:75] "Updated machine memory state" Jan 30 15:33:06.129717 kubelet[3399]: I0130 15:33:06.129709 3399 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:33:06.129807 kubelet[3399]: I0130 15:33:06.129793 3399 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:33:06.129841 kubelet[3399]: I0130 15:33:06.129836 3399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:33:06.214205 kubelet[3399]: I0130 15:33:06.214076 3399 topology_manager.go:215] "Topology Admit Handler" podUID="a963f9f4037a83428d0cbe70f5fa5d3b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.214443 kubelet[3399]: I0130 15:33:06.214276 3399 topology_manager.go:215] "Topology Admit Handler" podUID="3b89ea6802b4d2c8440b2fba93cef9ab" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.214603 kubelet[3399]: I0130 15:33:06.214440 3399 topology_manager.go:215] "Topology Admit Handler" podUID="a6e6bcbc92ce45f32e908845b8db726e" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.216008 kubelet[3399]: I0130 15:33:06.215916 3399 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.225197 kubelet[3399]: I0130 15:33:06.225110 3399 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.225366 kubelet[3399]: I0130 15:33:06.225277 3399 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.229660 kubelet[3399]: W0130 15:33:06.229606 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:33:06.229888 kubelet[3399]: W0130 15:33:06.229765 3399 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:33:06.230067 kubelet[3399]: W0130 15:33:06.229895 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:33:06.309525 kubelet[3399]: I0130 15:33:06.309406 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a963f9f4037a83428d0cbe70f5fa5d3b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8297fae690\" (UID: \"a963f9f4037a83428d0cbe70f5fa5d3b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.309764 kubelet[3399]: I0130 15:33:06.309567 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.309764 kubelet[3399]: I0130 15:33:06.309688 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.310009 kubelet[3399]: I0130 15:33:06.309779 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.310009 kubelet[3399]: I0130 15:33:06.309873 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.310009 kubelet[3399]: I0130 15:33:06.309972 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b89ea6802b4d2c8440b2fba93cef9ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-8297fae690\" (UID: \"3b89ea6802b4d2c8440b2fba93cef9ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.310279 kubelet[3399]: I0130 15:33:06.310074 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6e6bcbc92ce45f32e908845b8db726e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-8297fae690\" (UID: \"a6e6bcbc92ce45f32e908845b8db726e\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.310279 kubelet[3399]: I0130 15:33:06.310172 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a963f9f4037a83428d0cbe70f5fa5d3b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-8297fae690\" (UID: \"a963f9f4037a83428d0cbe70f5fa5d3b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:06.310476 kubelet[3399]: I0130 15:33:06.310260 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a963f9f4037a83428d0cbe70f5fa5d3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-8297fae690\" (UID: \"a963f9f4037a83428d0cbe70f5fa5d3b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:07.106607 kubelet[3399]: I0130 15:33:07.106503 3399 apiserver.go:52] "Watching apiserver" Jan 30 15:33:07.125419 kubelet[3399]: W0130 15:33:07.125333 3399 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:33:07.125656 kubelet[3399]: E0130 15:33:07.125471 3399 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-8297fae690\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" Jan 30 15:33:07.142474 kubelet[3399]: I0130 15:33:07.142434 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-8297fae690" podStartSLOduration=1.142419004 podStartE2EDuration="1.142419004s" podCreationTimestamp="2025-01-30 15:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:33:07.137485821 +0000 UTC m=+1.064658361" watchObservedRunningTime="2025-01-30 15:33:07.142419004 +0000 UTC m=+1.069591544" Jan 30 15:33:07.146514 kubelet[3399]: I0130 15:33:07.146474 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-8297fae690" podStartSLOduration=1.146465934 podStartE2EDuration="1.146465934s" podCreationTimestamp="2025-01-30 15:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:33:07.142535973 +0000 UTC m=+1.069708515" watchObservedRunningTime="2025-01-30 15:33:07.146465934 +0000 UTC m=+1.073638471" Jan 30 15:33:07.146608 kubelet[3399]: I0130 15:33:07.146557 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-8297fae690" podStartSLOduration=1.146554597 podStartE2EDuration="1.146554597s" podCreationTimestamp="2025-01-30 15:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:33:07.146548725 +0000 UTC m=+1.073721268" watchObservedRunningTime="2025-01-30 15:33:07.146554597 +0000 UTC m=+1.073727134" Jan 30 15:33:07.209368 kubelet[3399]: I0130 15:33:07.209279 3399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:33:10.166797 systemd[1]: Started sshd@9-139.178.70.183:22-218.92.0.207:59260.service - OpenSSH per-connection server daemon (218.92.0.207:59260). Jan 30 15:33:10.315938 sshd[3520]: Unable to negotiate with 218.92.0.207 port 59260: no matching key exchange method found. 
Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Jan 30 15:33:10.319496 systemd[1]: sshd@9-139.178.70.183:22-218.92.0.207:59260.service: Deactivated successfully. Jan 30 15:33:10.596101 sudo[2231]: pam_unix(sudo:session): session closed for user root Jan 30 15:33:10.596940 sshd[2225]: pam_unix(sshd:session): session closed for user core Jan 30 15:33:10.598440 systemd[1]: sshd@8-139.178.70.183:22-147.75.109.163:48284.service: Deactivated successfully. Jan 30 15:33:10.599802 systemd-logind[1895]: Session 11 logged out. Waiting for processes to exit. Jan 30 15:33:10.599950 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 15:33:10.600534 systemd-logind[1895]: Removed session 11. Jan 30 15:33:18.679410 kubelet[3399]: I0130 15:33:18.679360 3399 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:33:18.679779 containerd[1908]: time="2025-01-30T15:33:18.679642131Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 15:33:18.679989 kubelet[3399]: I0130 15:33:18.679818 3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:33:19.742269 kubelet[3399]: I0130 15:33:19.742191 3399 topology_manager.go:215] "Topology Admit Handler" podUID="a854e811-5017-496f-821f-6d53aec8f1c2" podNamespace="kube-system" podName="kube-proxy-kkrvb" Jan 30 15:33:19.809380 kubelet[3399]: I0130 15:33:19.809313 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a854e811-5017-496f-821f-6d53aec8f1c2-kube-proxy\") pod \"kube-proxy-kkrvb\" (UID: \"a854e811-5017-496f-821f-6d53aec8f1c2\") " pod="kube-system/kube-proxy-kkrvb" Jan 30 15:33:19.809624 kubelet[3399]: I0130 15:33:19.809470 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a854e811-5017-496f-821f-6d53aec8f1c2-xtables-lock\") pod \"kube-proxy-kkrvb\" (UID: \"a854e811-5017-496f-821f-6d53aec8f1c2\") " pod="kube-system/kube-proxy-kkrvb" Jan 30 15:33:19.809624 kubelet[3399]: I0130 15:33:19.809560 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a854e811-5017-496f-821f-6d53aec8f1c2-lib-modules\") pod \"kube-proxy-kkrvb\" (UID: \"a854e811-5017-496f-821f-6d53aec8f1c2\") " pod="kube-system/kube-proxy-kkrvb" Jan 30 15:33:19.809778 kubelet[3399]: I0130 15:33:19.809623 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kmh\" (UniqueName: \"kubernetes.io/projected/a854e811-5017-496f-821f-6d53aec8f1c2-kube-api-access-g5kmh\") pod \"kube-proxy-kkrvb\" (UID: \"a854e811-5017-496f-821f-6d53aec8f1c2\") " pod="kube-system/kube-proxy-kkrvb" Jan 30 15:33:19.813124 kubelet[3399]: I0130 15:33:19.813052 3399 topology_manager.go:215] "Topology Admit Handler" podUID="c3b8427b-ab74-4b6a-9681-06ce3821ae2d" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-r2cs2" Jan 30 15:33:19.910605 kubelet[3399]: I0130 15:33:19.910509 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q59qs\" (UniqueName: \"kubernetes.io/projected/c3b8427b-ab74-4b6a-9681-06ce3821ae2d-kube-api-access-q59qs\") pod 
\"tigera-operator-7bc55997bb-r2cs2\" (UID: \"c3b8427b-ab74-4b6a-9681-06ce3821ae2d\") " pod="tigera-operator/tigera-operator-7bc55997bb-r2cs2" Jan 30 15:33:19.910885 kubelet[3399]: I0130 15:33:19.910679 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3b8427b-ab74-4b6a-9681-06ce3821ae2d-var-lib-calico\") pod \"tigera-operator-7bc55997bb-r2cs2\" (UID: \"c3b8427b-ab74-4b6a-9681-06ce3821ae2d\") " pod="tigera-operator/tigera-operator-7bc55997bb-r2cs2" Jan 30 15:33:20.052423 containerd[1908]: time="2025-01-30T15:33:20.052186002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkrvb,Uid:a854e811-5017-496f-821f-6d53aec8f1c2,Namespace:kube-system,Attempt:0,}" Jan 30 15:33:20.063412 containerd[1908]: time="2025-01-30T15:33:20.063373938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:20.063412 containerd[1908]: time="2025-01-30T15:33:20.063397355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:20.063412 containerd[1908]: time="2025-01-30T15:33:20.063404040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:20.063536 containerd[1908]: time="2025-01-30T15:33:20.063445403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:20.108131 containerd[1908]: time="2025-01-30T15:33:20.108076945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkrvb,Uid:a854e811-5017-496f-821f-6d53aec8f1c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2033a9a2a81fdb84f7cb66ae8097843b2728d2e96ff1de0d9a752aa9fec71616\"" Jan 30 15:33:20.109787 containerd[1908]: time="2025-01-30T15:33:20.109767031Z" level=info msg="CreateContainer within sandbox \"2033a9a2a81fdb84f7cb66ae8097843b2728d2e96ff1de0d9a752aa9fec71616\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:33:20.114969 containerd[1908]: time="2025-01-30T15:33:20.114904647Z" level=info msg="CreateContainer within sandbox \"2033a9a2a81fdb84f7cb66ae8097843b2728d2e96ff1de0d9a752aa9fec71616\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e897548e039a932dd4c86c3703cd24b7039bc829ff15559a32fd3050c686fe0\"" Jan 30 15:33:20.115245 containerd[1908]: time="2025-01-30T15:33:20.115232302Z" level=info msg="StartContainer for \"3e897548e039a932dd4c86c3703cd24b7039bc829ff15559a32fd3050c686fe0\"" Jan 30 15:33:20.118354 containerd[1908]: time="2025-01-30T15:33:20.118334515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r2cs2,Uid:c3b8427b-ab74-4b6a-9681-06ce3821ae2d,Namespace:tigera-operator,Attempt:0,}" Jan 30 15:33:20.128508 containerd[1908]: time="2025-01-30T15:33:20.128471929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:20.128508 containerd[1908]: time="2025-01-30T15:33:20.128496974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:20.128508 containerd[1908]: time="2025-01-30T15:33:20.128503897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:20.128618 containerd[1908]: time="2025-01-30T15:33:20.128552173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:20.162713 containerd[1908]: time="2025-01-30T15:33:20.162692502Z" level=info msg="StartContainer for \"3e897548e039a932dd4c86c3703cd24b7039bc829ff15559a32fd3050c686fe0\" returns successfully" Jan 30 15:33:20.172327 containerd[1908]: time="2025-01-30T15:33:20.172305132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r2cs2,Uid:c3b8427b-ab74-4b6a-9681-06ce3821ae2d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7c0744bb379c69f43b5afb7fe733896eca567636668fed5639de91628c42528e\"" Jan 30 15:33:20.173192 containerd[1908]: time="2025-01-30T15:33:20.173178243Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 15:33:20.701788 update_engine[1901]: I20250130 15:33:20.701653 1901 update_attempter.cc:509] Updating boot flags... Jan 30 15:33:20.740550 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3822) Jan 30 15:33:20.767568 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3768) Jan 30 15:33:20.787569 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3768) Jan 30 15:33:21.157170 kubelet[3399]: I0130 15:33:21.157096 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkrvb" podStartSLOduration=2.157083038 podStartE2EDuration="2.157083038s" podCreationTimestamp="2025-01-30 15:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:33:21.1568633 +0000 UTC m=+15.084035844" watchObservedRunningTime="2025-01-30 15:33:21.157083038 +0000 UTC m=+15.084255581" Jan 30 15:33:22.080010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225245710.mount: Deactivated successfully. 
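The containerd records above trace the standard CRI pod-start sequence that kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, and StartContainer runs it ("StartContainer ... returns successfully"). The following is a minimal sketch of that same three-call sequence against a containerd CRI socket, assuming the generated CRI v1 gRPC client from k8s.io/cri-api; the pod and container names, UID, and configs here are illustrative placeholders, not values taken from this log.

    // crisketch.go: RunPodSandbox -> CreateContainer -> StartContainer,
    // the sequence visible in the containerd log lines above.
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the same CRI endpoint kubelet is configured with
        // (its --container-runtime-endpoint, per the deprecation notice logged earlier).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // 1. RunPodSandbox: creates the pod sandbox (backed by the pause image
        //    pulled above) and returns its id.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name: "example-pod", Namespace: "kube-system", Uid: "example-uid", Attempt: 0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer: places a container inside that sandbox
        //    ("CreateContainer within sandbox ... returns container id").
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "example", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.8"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer: the step the log reports as
        //    "StartContainer for ... returns successfully".
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
        log.Printf("sandbox %s / container %s started", sb.PodSandboxId, ctr.ContainerId)
    }

Kubelet performs exactly this handshake through its own CRI client for each of the static pods and for kube-proxy above, which is why each sandbox id returned by RunPodSandbox reappears as the target of the subsequent CreateContainer and StartContainer records.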
Jan 30 15:33:22.744370 containerd[1908]: time="2025-01-30T15:33:22.744346792Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:22.744614 containerd[1908]: time="2025-01-30T15:33:22.744519667Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 15:33:22.744945 containerd[1908]: time="2025-01-30T15:33:22.744903979Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:22.745913 containerd[1908]: time="2025-01-30T15:33:22.745873269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:22.746412 containerd[1908]: time="2025-01-30T15:33:22.746371354Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.573174227s" Jan 30 15:33:22.746412 containerd[1908]: time="2025-01-30T15:33:22.746387462Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 15:33:22.747336 containerd[1908]: time="2025-01-30T15:33:22.747319478Z" level=info msg="CreateContainer within sandbox \"7c0744bb379c69f43b5afb7fe733896eca567636668fed5639de91628c42528e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 15:33:22.750829 containerd[1908]: time="2025-01-30T15:33:22.750788410Z" level=info msg="CreateContainer within sandbox \"7c0744bb379c69f43b5afb7fe733896eca567636668fed5639de91628c42528e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"65e305fe257f55d558135cb2e8703b1dfb4edf5c8eea990dae413e805a3066da\"" Jan 30 15:33:22.750990 containerd[1908]: time="2025-01-30T15:33:22.750976765Z" level=info msg="StartContainer for \"65e305fe257f55d558135cb2e8703b1dfb4edf5c8eea990dae413e805a3066da\"" Jan 30 15:33:22.814675 containerd[1908]: time="2025-01-30T15:33:22.814649029Z" level=info msg="StartContainer for \"65e305fe257f55d558135cb2e8703b1dfb4edf5c8eea990dae413e805a3066da\" returns successfully" Jan 30 15:33:25.588681 kubelet[3399]: I0130 15:33:25.588381 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-r2cs2" podStartSLOduration=4.0144934 podStartE2EDuration="6.588340183s" podCreationTimestamp="2025-01-30 15:33:19 +0000 UTC" firstStartedPulling="2025-01-30 15:33:20.172909108 +0000 UTC m=+14.100081651" lastFinishedPulling="2025-01-30 15:33:22.746755893 +0000 UTC m=+16.673928434" observedRunningTime="2025-01-30 15:33:23.178682597 +0000 UTC m=+17.105855203" watchObservedRunningTime="2025-01-30 15:33:25.588340183 +0000 UTC m=+19.515512761" Jan 30 15:33:25.589926 kubelet[3399]: I0130 15:33:25.588766 3399 topology_manager.go:215] "Topology Admit Handler" podUID="581924f0-f4cf-4b74-b177-eb813960e71f" podNamespace="calico-system" podName="calico-typha-544b898688-jphw2" Jan 30 15:33:25.617006 kubelet[3399]: I0130 15:33:25.616976 3399 topology_manager.go:215] "Topology 
Admit Handler" podUID="81f392dc-e806-4dfe-9431-ef446056141c" podNamespace="calico-system" podName="calico-node-r26mq" Jan 30 15:33:25.654759 kubelet[3399]: I0130 15:33:25.654652 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81f392dc-e806-4dfe-9431-ef446056141c-node-certs\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.654759 kubelet[3399]: I0130 15:33:25.654743 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-var-run-calico\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655129 kubelet[3399]: I0130 15:33:25.654795 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-flexvol-driver-host\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655129 kubelet[3399]: I0130 15:33:25.654844 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/581924f0-f4cf-4b74-b177-eb813960e71f-tigera-ca-bundle\") pod \"calico-typha-544b898688-jphw2\" (UID: \"581924f0-f4cf-4b74-b177-eb813960e71f\") " pod="calico-system/calico-typha-544b898688-jphw2" Jan 30 15:33:25.655129 kubelet[3399]: I0130 15:33:25.654891 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-cni-log-dir\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655129 kubelet[3399]: I0130 15:33:25.654937 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f392dc-e806-4dfe-9431-ef446056141c-tigera-ca-bundle\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655129 kubelet[3399]: I0130 15:33:25.654985 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79p8t\" (UniqueName: \"kubernetes.io/projected/81f392dc-e806-4dfe-9431-ef446056141c-kube-api-access-79p8t\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655668 kubelet[3399]: I0130 15:33:25.655031 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzk2\" (UniqueName: \"kubernetes.io/projected/581924f0-f4cf-4b74-b177-eb813960e71f-kube-api-access-2mzk2\") pod \"calico-typha-544b898688-jphw2\" (UID: \"581924f0-f4cf-4b74-b177-eb813960e71f\") " pod="calico-system/calico-typha-544b898688-jphw2" Jan 30 15:33:25.655668 kubelet[3399]: I0130 15:33:25.655072 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-lib-modules\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655668 kubelet[3399]: I0130 15:33:25.655112 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-policysync\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655668 kubelet[3399]: I0130 15:33:25.655185 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-cni-bin-dir\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.655668 kubelet[3399]: I0130 15:33:25.655233 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-xtables-lock\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.656082 kubelet[3399]: I0130 15:33:25.655285 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/581924f0-f4cf-4b74-b177-eb813960e71f-typha-certs\") pod \"calico-typha-544b898688-jphw2\" (UID: \"581924f0-f4cf-4b74-b177-eb813960e71f\") " pod="calico-system/calico-typha-544b898688-jphw2" Jan 30 15:33:25.656082 kubelet[3399]: I0130 15:33:25.655327 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-var-lib-calico\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.656082 kubelet[3399]: I0130 15:33:25.655368 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81f392dc-e806-4dfe-9431-ef446056141c-cni-net-dir\") pod \"calico-node-r26mq\" (UID: \"81f392dc-e806-4dfe-9431-ef446056141c\") " pod="calico-system/calico-node-r26mq" Jan 30 15:33:25.749011 kubelet[3399]: I0130 15:33:25.748912 3399 topology_manager.go:215] "Topology Admit Handler" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" podNamespace="calico-system" podName="csi-node-driver-bs9r5" Jan 30 15:33:25.750073 kubelet[3399]: E0130 15:33:25.749962 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs9r5" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" Jan 30 15:33:25.759207 kubelet[3399]: E0130 15:33:25.759154 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:33:25.759207 kubelet[3399]: W0130 15:33:25.759201 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: 
"" Jan 30 15:33:25.759657 kubelet[3399]: E0130 15:33:25.759256 3399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:33:25.759961 kubelet[3399]: E0130 15:33:25.759897 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:33:25.759961 kubelet[3399]: W0130 15:33:25.759940 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:33:25.760210 kubelet[3399]: E0130 15:33:25.759987 3399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:33:25.763639 kubelet[3399]: E0130 15:33:25.763590 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:33:25.763639 kubelet[3399]: W0130 15:33:25.763631 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:33:25.763894 kubelet[3399]: E0130 15:33:25.763671 3399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:33:25.764350 kubelet[3399]: E0130 15:33:25.764320 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:33:25.764462 kubelet[3399]: W0130 15:33:25.764351 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:33:25.764462 kubelet[3399]: E0130 15:33:25.764379 3399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 15:33:25.771275 kubelet[3399]: E0130 15:33:25.771249 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:33:25.771275 kubelet[3399]: W0130 15:33:25.771271 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:33:25.771580 kubelet[3399]: E0130 15:33:25.771300 3399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 15:33:25.771580 kubelet[3399]: E0130 15:33:25.771532 3399 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 15:33:25.771580 kubelet[3399]: W0130 15:33:25.771552 3399 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 15:33:25.771580 kubelet[3399]: E0130 15:33:25.771565 3399 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same E/W/E FlexVolume failure triple repeats with fresh timestamps through Jan 30 15:33:25.866, interleaved with the unique records kept below; the repeats are omitted ...]
Jan 30 15:33:25.858345 kubelet[3399]: I0130 15:33:25.858041 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5864f7d5-fb06-43dd-b6d9-86f374c2cf41-socket-dir\") pod \"csi-node-driver-bs9r5\" (UID: \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\") " pod="calico-system/csi-node-driver-bs9r5"
Jan 30 15:33:25.858737 kubelet[3399]: I0130 15:33:25.858708 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5864f7d5-fb06-43dd-b6d9-86f374c2cf41-kubelet-dir\") pod \"csi-node-driver-bs9r5\" (UID: \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\") " pod="calico-system/csi-node-driver-bs9r5"
Jan 30 15:33:25.860793 kubelet[3399]: I0130 15:33:25.860626 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptfl\" (UniqueName: \"kubernetes.io/projected/5864f7d5-fb06-43dd-b6d9-86f374c2cf41-kube-api-access-lptfl\") pod \"csi-node-driver-bs9r5\" (UID: \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\") " pod="calico-system/csi-node-driver-bs9r5"
Jan 30 15:33:25.862661 kubelet[3399]: I0130 15:33:25.862523 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5864f7d5-fb06-43dd-b6d9-86f374c2cf41-registration-dir\") pod \"csi-node-driver-bs9r5\" (UID: \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\") " pod="calico-system/csi-node-driver-bs9r5"
Jan 30 15:33:25.865519 kubelet[3399]: I0130 15:33:25.865482 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5864f7d5-fb06-43dd-b6d9-86f374c2cf41-varrun\") pod \"csi-node-driver-bs9r5\" (UID: \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\") " pod="calico-system/csi-node-driver-bs9r5"
Jan 30 15:33:25.895735 containerd[1908]: time="2025-01-30T15:33:25.895605161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-544b898688-jphw2,Uid:581924f0-f4cf-4b74-b177-eb813960e71f,Namespace:calico-system,Attempt:0,}"
Jan 30 15:33:25.905750 containerd[1908]: time="2025-01-30T15:33:25.905708117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:33:25.905750 containerd[1908]: time="2025-01-30T15:33:25.905745004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:33:25.905846 containerd[1908]: time="2025-01-30T15:33:25.905755974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:33:25.906092 containerd[1908]: time="2025-01-30T15:33:25.906048867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:33:25.919908 containerd[1908]: time="2025-01-30T15:33:25.919849068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r26mq,Uid:81f392dc-e806-4dfe-9431-ef446056141c,Namespace:calico-system,Attempt:0,}"
Jan 30 15:33:25.929151 containerd[1908]: time="2025-01-30T15:33:25.929110593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 15:33:25.929151 containerd[1908]: time="2025-01-30T15:33:25.929141541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 15:33:25.929151 containerd[1908]: time="2025-01-30T15:33:25.929151319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:33:25.929264 containerd[1908]: time="2025-01-30T15:33:25.929191568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 15:33:25.942541 containerd[1908]: time="2025-01-30T15:33:25.942508857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r26mq,Uid:81f392dc-e806-4dfe-9431-ef446056141c,Namespace:calico-system,Attempt:0,} returns sandbox id \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\""
Jan 30 15:33:25.943288 containerd[1908]: time="2025-01-30T15:33:25.943271313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 15:33:25.946403 containerd[1908]: time="2025-01-30T15:33:25.946385502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-544b898688-jphw2,Uid:581924f0-f4cf-4b74-b177-eb813960e71f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b72cf7b008971e7b22898f529dbaa3ea82e079f47bf831826d19046f103252f\""
[... the FlexVolume failure triple resumes at Jan 30 15:33:25.966 and repeats through Jan 30 15:33:25.995; the repeats are omitted ...]
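What the kubelet is doing in that flood of errors: its FlexVolume prober walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and execs the uds binary inside it with the single argument init, JSON-decoding its stdout. The binary is absent on this image, so stdout is empty and the decode fails with "unexpected end of JSON input"; the triple then recurs on every probe pass. Below is a minimal sketch of the init handshake a driver at that path would have to implement, assuming only the stock FlexVolume DriverStatus contract; it is not the actual nodeagent~uds driver.

    // flexvolume_init.go - minimal FlexVolume "init" responder (sketch).
    // The kubelet execs the driver with "init" and expects a JSON DriverStatus
    // on stdout; empty stdout is what produces "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 || os.Args[1] != "init" {
            // Anything this sketch does not handle is reported as unsupported.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
            os.Exit(1)
        }
        // "attach": false means the driver has no attach/detach phase, so the
        // kubelet will drive mounts directly.
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out))
    }

Installing a binary like this as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, or removing the stale nodeagent~uds directory, would silence the repeating triple; the errors are otherwise harmless noise.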
Jan 30 15:33:27.114376 kubelet[3399]: E0130 15:33:27.114317 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs9r5" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" Jan 30 15:33:27.950583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335802599.mount: Deactivated successfully. Jan 30 15:33:27.987817 containerd[1908]: time="2025-01-30T15:33:27.987792384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:27.988103 containerd[1908]: time="2025-01-30T15:33:27.987992535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 15:33:27.988346 containerd[1908]: time="2025-01-30T15:33:27.988310366Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:27.989664 containerd[1908]: time="2025-01-30T15:33:27.989622344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:27.989937 containerd[1908]: time="2025-01-30T15:33:27.989896717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.046604523s" Jan 30 15:33:27.989937 containerd[1908]: time="2025-01-30T15:33:27.989913272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 15:33:27.990419 containerd[1908]: time="2025-01-30T15:33:27.990408374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 15:33:27.990923 containerd[1908]: time="2025-01-30T15:33:27.990910665Z" level=info msg="CreateContainer within sandbox \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 15:33:27.995843 containerd[1908]: time="2025-01-30T15:33:27.995795554Z" level=info msg="CreateContainer within sandbox \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8c79c38a1a5293cdb5b977a5c5d081fff07a0a959696a131f7d9dede4067f410\"" Jan 30 15:33:27.996018 containerd[1908]: time="2025-01-30T15:33:27.996004941Z" level=info msg="StartContainer for \"8c79c38a1a5293cdb5b977a5c5d081fff07a0a959696a131f7d9dede4067f410\"" Jan 30 15:33:28.034721 containerd[1908]: time="2025-01-30T15:33:28.034700420Z" level=info msg="StartContainer for \"8c79c38a1a5293cdb5b977a5c5d081fff07a0a959696a131f7d9dede4067f410\" returns successfully" Jan 30 15:33:28.677805 containerd[1908]: time="2025-01-30T15:33:28.677757796Z" level=info msg="shim disconnected"
id=8c79c38a1a5293cdb5b977a5c5d081fff07a0a959696a131f7d9dede4067f410 namespace=k8s.io Jan 30 15:33:28.677805 containerd[1908]: time="2025-01-30T15:33:28.677805710Z" level=warning msg="cleaning up after shim disconnected" id=8c79c38a1a5293cdb5b977a5c5d081fff07a0a959696a131f7d9dede4067f410 namespace=k8s.io Jan 30 15:33:28.677930 containerd[1908]: time="2025-01-30T15:33:28.677811622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:33:28.938549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c79c38a1a5293cdb5b977a5c5d081fff07a0a959696a131f7d9dede4067f410-rootfs.mount: Deactivated successfully. Jan 30 15:33:29.114759 kubelet[3399]: E0130 15:33:29.114667 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs9r5" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" Jan 30 15:33:29.665655 containerd[1908]: time="2025-01-30T15:33:29.665628958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:29.665951 containerd[1908]: time="2025-01-30T15:33:29.665843277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 15:33:29.666233 containerd[1908]: time="2025-01-30T15:33:29.666220486Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:29.667157 containerd[1908]: time="2025-01-30T15:33:29.667144191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:29.667572 containerd[1908]: time="2025-01-30T15:33:29.667556686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.677132475s" Jan 30 15:33:29.667617 containerd[1908]: time="2025-01-30T15:33:29.667573675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 15:33:29.668073 containerd[1908]: time="2025-01-30T15:33:29.668037869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 15:33:29.671045 containerd[1908]: time="2025-01-30T15:33:29.671022414Z" level=info msg="CreateContainer within sandbox \"7b72cf7b008971e7b22898f529dbaa3ea82e079f47bf831826d19046f103252f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 15:33:29.676560 containerd[1908]: time="2025-01-30T15:33:29.676508668Z" level=info msg="CreateContainer within sandbox \"7b72cf7b008971e7b22898f529dbaa3ea82e079f47bf831826d19046f103252f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a5702cd83b693d67df096b722ffba00af66d4f1f50259f12512cc1b8b706bd1\"" Jan 30 15:33:29.676749 containerd[1908]: time="2025-01-30T15:33:29.676712281Z" level=info msg="StartContainer for \"6a5702cd83b693d67df096b722ffba00af66d4f1f50259f12512cc1b8b706bd1\""
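The ImageCreate / stop pulling / Pulled sequence above is containerd resolving, fetching, and unpacking one image, with the measured wall-clock duration ("1.677132475s") printed in the final Pulled record. The same pull can be driven against containerd's k8s.io namespace with its Go client; this is a sketch only, since the kubelet actually reaches this code through the CRI ImageService rather than through this client.

    // pull_typha.go - replay the logged pull via containerd's Go client (sketch).
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // "k8s.io" is the namespace containerd's CRI plugin stores images in.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors the "repo tag" and "repo digest" fields of the Pulled record.
        fmt.Println(img.Name(), img.Target().Digest)
    }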
\"6a5702cd83b693d67df096b722ffba00af66d4f1f50259f12512cc1b8b706bd1\"" Jan 30 15:33:29.722401 containerd[1908]: time="2025-01-30T15:33:29.722379430Z" level=info msg="StartContainer for \"6a5702cd83b693d67df096b722ffba00af66d4f1f50259f12512cc1b8b706bd1\" returns successfully" Jan 30 15:33:30.213175 kubelet[3399]: I0130 15:33:30.213030 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-544b898688-jphw2" podStartSLOduration=1.4918701730000001 podStartE2EDuration="5.212980551s" podCreationTimestamp="2025-01-30 15:33:25 +0000 UTC" firstStartedPulling="2025-01-30 15:33:25.946868186 +0000 UTC m=+19.874040726" lastFinishedPulling="2025-01-30 15:33:29.667978563 +0000 UTC m=+23.595151104" observedRunningTime="2025-01-30 15:33:30.212846749 +0000 UTC m=+24.140019352" watchObservedRunningTime="2025-01-30 15:33:30.212980551 +0000 UTC m=+24.140153191" Jan 30 15:33:31.114376 kubelet[3399]: E0130 15:33:31.114349 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bs9r5" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" Jan 30 15:33:31.195160 kubelet[3399]: I0130 15:33:31.195142 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 15:33:32.057701 containerd[1908]: time="2025-01-30T15:33:32.057675250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:32.057941 containerd[1908]: time="2025-01-30T15:33:32.057921192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 15:33:32.058357 containerd[1908]: time="2025-01-30T15:33:32.058342973Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:32.059338 containerd[1908]: time="2025-01-30T15:33:32.059327890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:32.059826 containerd[1908]: time="2025-01-30T15:33:32.059781002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.391725784s" Jan 30 15:33:32.059826 containerd[1908]: time="2025-01-30T15:33:32.059798609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 15:33:32.060849 containerd[1908]: time="2025-01-30T15:33:32.060809466Z" level=info msg="CreateContainer within sandbox \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 15:33:32.067284 containerd[1908]: time="2025-01-30T15:33:32.067238384Z" level=info msg="CreateContainer within sandbox \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} 
returns container id \"88db8c6abaf024b0c3b5ed5564a31945f285b24a1e2388a80c06eee68b60a327\"" Jan 30 15:33:32.067488 containerd[1908]: time="2025-01-30T15:33:32.067451296Z" level=info msg="StartContainer for \"88db8c6abaf024b0c3b5ed5564a31945f285b24a1e2388a80c06eee68b60a327\"" Jan 30 15:33:32.096964 containerd[1908]: time="2025-01-30T15:33:32.096941162Z" level=info msg="StartContainer for \"88db8c6abaf024b0c3b5ed5564a31945f285b24a1e2388a80c06eee68b60a327\" returns successfully" Jan 30 15:33:32.650516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88db8c6abaf024b0c3b5ed5564a31945f285b24a1e2388a80c06eee68b60a327-rootfs.mount: Deactivated successfully. Jan 30 15:33:32.716074 kubelet[3399]: I0130 15:33:32.715984 3399 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 15:33:32.752282 kubelet[3399]: I0130 15:33:32.752203 3399 topology_manager.go:215] "Topology Admit Handler" podUID="47aaec61-dc45-4322-ae57-2b2017382ed5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9scnd" Jan 30 15:33:32.753316 kubelet[3399]: I0130 15:33:32.753247 3399 topology_manager.go:215] "Topology Admit Handler" podUID="d99c3914-c98d-41fc-8f33-a1ffbbccba09" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bzqcg" Jan 30 15:33:32.754224 kubelet[3399]: I0130 15:33:32.754159 3399 topology_manager.go:215] "Topology Admit Handler" podUID="a950d8fb-24c2-4d89-81c7-1f97e95d8e16" podNamespace="calico-apiserver" podName="calico-apiserver-684d4dff56-gv4f8" Jan 30 15:33:32.755249 kubelet[3399]: I0130 15:33:32.755148 3399 topology_manager.go:215] "Topology Admit Handler" podUID="12625ef2-af4d-498a-be42-4bc310bbd487" podNamespace="calico-system" podName="calico-kube-controllers-745d85c999-h9vmg" Jan 30 15:33:32.756598 kubelet[3399]: I0130 15:33:32.756492 3399 topology_manager.go:215] "Topology Admit Handler" podUID="3efa98ef-abd2-46af-8ebe-522bd24dc469" podNamespace="calico-apiserver" podName="calico-apiserver-684d4dff56-bv6pr" Jan 30 15:33:32.816152 kubelet[3399]: I0130 15:33:32.816068 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p78z\" (UniqueName: \"kubernetes.io/projected/d99c3914-c98d-41fc-8f33-a1ffbbccba09-kube-api-access-9p78z\") pod \"coredns-7db6d8ff4d-bzqcg\" (UID: \"d99c3914-c98d-41fc-8f33-a1ffbbccba09\") " pod="kube-system/coredns-7db6d8ff4d-bzqcg" Jan 30 15:33:32.816457 kubelet[3399]: I0130 15:33:32.816199 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a950d8fb-24c2-4d89-81c7-1f97e95d8e16-calico-apiserver-certs\") pod \"calico-apiserver-684d4dff56-gv4f8\" (UID: \"a950d8fb-24c2-4d89-81c7-1f97e95d8e16\") " pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" Jan 30 15:33:32.816457 kubelet[3399]: I0130 15:33:32.816301 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8bfs\" (UniqueName: \"kubernetes.io/projected/3efa98ef-abd2-46af-8ebe-522bd24dc469-kube-api-access-c8bfs\") pod \"calico-apiserver-684d4dff56-bv6pr\" (UID: \"3efa98ef-abd2-46af-8ebe-522bd24dc469\") " pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" Jan 30 15:33:32.816457 kubelet[3399]: I0130 15:33:32.816392 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54zw\" (UniqueName: \"kubernetes.io/projected/47aaec61-dc45-4322-ae57-2b2017382ed5-kube-api-access-t54zw\") pod 
\"coredns-7db6d8ff4d-9scnd\" (UID: \"47aaec61-dc45-4322-ae57-2b2017382ed5\") " pod="kube-system/coredns-7db6d8ff4d-9scnd" Jan 30 15:33:32.816886 kubelet[3399]: I0130 15:33:32.816485 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12625ef2-af4d-498a-be42-4bc310bbd487-tigera-ca-bundle\") pod \"calico-kube-controllers-745d85c999-h9vmg\" (UID: \"12625ef2-af4d-498a-be42-4bc310bbd487\") " pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" Jan 30 15:33:32.816886 kubelet[3399]: I0130 15:33:32.816630 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3efa98ef-abd2-46af-8ebe-522bd24dc469-calico-apiserver-certs\") pod \"calico-apiserver-684d4dff56-bv6pr\" (UID: \"3efa98ef-abd2-46af-8ebe-522bd24dc469\") " pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" Jan 30 15:33:32.816886 kubelet[3399]: I0130 15:33:32.816721 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99c3914-c98d-41fc-8f33-a1ffbbccba09-config-volume\") pod \"coredns-7db6d8ff4d-bzqcg\" (UID: \"d99c3914-c98d-41fc-8f33-a1ffbbccba09\") " pod="kube-system/coredns-7db6d8ff4d-bzqcg" Jan 30 15:33:32.816886 kubelet[3399]: I0130 15:33:32.816825 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47aaec61-dc45-4322-ae57-2b2017382ed5-config-volume\") pod \"coredns-7db6d8ff4d-9scnd\" (UID: \"47aaec61-dc45-4322-ae57-2b2017382ed5\") " pod="kube-system/coredns-7db6d8ff4d-9scnd" Jan 30 15:33:32.817319 kubelet[3399]: I0130 15:33:32.816928 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjd25\" (UniqueName: \"kubernetes.io/projected/a950d8fb-24c2-4d89-81c7-1f97e95d8e16-kube-api-access-jjd25\") pod \"calico-apiserver-684d4dff56-gv4f8\" (UID: \"a950d8fb-24c2-4d89-81c7-1f97e95d8e16\") " pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" Jan 30 15:33:32.817319 kubelet[3399]: I0130 15:33:32.817016 3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qv55\" (UniqueName: \"kubernetes.io/projected/12625ef2-af4d-498a-be42-4bc310bbd487-kube-api-access-7qv55\") pod \"calico-kube-controllers-745d85c999-h9vmg\" (UID: \"12625ef2-af4d-498a-be42-4bc310bbd487\") " pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" Jan 30 15:33:33.062416 containerd[1908]: time="2025-01-30T15:33:33.062325937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9scnd,Uid:47aaec61-dc45-4322-ae57-2b2017382ed5,Namespace:kube-system,Attempt:0,}" Jan 30 15:33:33.063504 containerd[1908]: time="2025-01-30T15:33:33.063122436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bzqcg,Uid:d99c3914-c98d-41fc-8f33-a1ffbbccba09,Namespace:kube-system,Attempt:0,}" Jan 30 15:33:33.066620 containerd[1908]: time="2025-01-30T15:33:33.066507659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-gv4f8,Uid:a950d8fb-24c2-4d89-81c7-1f97e95d8e16,Namespace:calico-apiserver,Attempt:0,}" Jan 30 15:33:33.067322 containerd[1908]: time="2025-01-30T15:33:33.067241554Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-745d85c999-h9vmg,Uid:12625ef2-af4d-498a-be42-4bc310bbd487,Namespace:calico-system,Attempt:0,}" Jan 30 15:33:33.070568 containerd[1908]: time="2025-01-30T15:33:33.070474057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-bv6pr,Uid:3efa98ef-abd2-46af-8ebe-522bd24dc469,Namespace:calico-apiserver,Attempt:0,}" Jan 30 15:33:33.114522 containerd[1908]: time="2025-01-30T15:33:33.114510005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs9r5,Uid:5864f7d5-fb06-43dd-b6d9-86f374c2cf41,Namespace:calico-system,Attempt:0,}" Jan 30 15:33:33.301470 containerd[1908]: time="2025-01-30T15:33:33.301437758Z" level=info msg="shim disconnected" id=88db8c6abaf024b0c3b5ed5564a31945f285b24a1e2388a80c06eee68b60a327 namespace=k8s.io Jan 30 15:33:33.301470 containerd[1908]: time="2025-01-30T15:33:33.301467461Z" level=warning msg="cleaning up after shim disconnected" id=88db8c6abaf024b0c3b5ed5564a31945f285b24a1e2388a80c06eee68b60a327 namespace=k8s.io Jan 30 15:33:33.301470 containerd[1908]: time="2025-01-30T15:33:33.301473031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:33:33.342073 containerd[1908]: time="2025-01-30T15:33:33.341998907Z" level=error msg="Failed to destroy network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.342210 containerd[1908]: time="2025-01-30T15:33:33.342196824Z" level=error msg="encountered an error cleaning up failed sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.342244 containerd[1908]: time="2025-01-30T15:33:33.342231533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9scnd,Uid:47aaec61-dc45-4322-ae57-2b2017382ed5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.342423 kubelet[3399]: E0130 15:33:33.342395 3399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.342479 kubelet[3399]: E0130 15:33:33.342451 3399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9scnd" Jan 30 15:33:33.342479 kubelet[3399]: E0130 
15:33:33.342470 3399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9scnd" Jan 30 15:33:33.342523 kubelet[3399]: E0130 15:33:33.342499 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9scnd_kube-system(47aaec61-dc45-4322-ae57-2b2017382ed5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9scnd_kube-system(47aaec61-dc45-4322-ae57-2b2017382ed5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9scnd" podUID="47aaec61-dc45-4322-ae57-2b2017382ed5" Jan 30 15:33:33.343000 containerd[1908]: time="2025-01-30T15:33:33.342974086Z" level=error msg="Failed to destroy network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343156 containerd[1908]: time="2025-01-30T15:33:33.343143043Z" level=error msg="encountered an error cleaning up failed sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343180 containerd[1908]: time="2025-01-30T15:33:33.343170455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bzqcg,Uid:d99c3914-c98d-41fc-8f33-a1ffbbccba09,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343302 kubelet[3399]: E0130 15:33:33.343266 3399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343354 kubelet[3399]: E0130 15:33:33.343302 3399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-bzqcg" Jan 30 15:33:33.343354 kubelet[3399]: E0130 15:33:33.343321 3399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bzqcg" Jan 30 15:33:33.343432 kubelet[3399]: E0130 15:33:33.343350 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bzqcg_kube-system(d99c3914-c98d-41fc-8f33-a1ffbbccba09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bzqcg_kube-system(d99c3914-c98d-41fc-8f33-a1ffbbccba09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bzqcg" podUID="d99c3914-c98d-41fc-8f33-a1ffbbccba09" Jan 30 15:33:33.343733 containerd[1908]: time="2025-01-30T15:33:33.343679035Z" level=error msg="Failed to destroy network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343875 containerd[1908]: time="2025-01-30T15:33:33.343861458Z" level=error msg="encountered an error cleaning up failed sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343910 containerd[1908]: time="2025-01-30T15:33:33.343882174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745d85c999-h9vmg,Uid:12625ef2-af4d-498a-be42-4bc310bbd487,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343965 kubelet[3399]: E0130 15:33:33.343951 3399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.343986 kubelet[3399]: E0130 15:33:33.343975 3399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" Jan 30 15:33:33.344005 kubelet[3399]: E0130 15:33:33.343989 3399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" Jan 30 15:33:33.344022 kubelet[3399]: E0130 15:33:33.344010 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-745d85c999-h9vmg_calico-system(12625ef2-af4d-498a-be42-4bc310bbd487)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-745d85c999-h9vmg_calico-system(12625ef2-af4d-498a-be42-4bc310bbd487)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" podUID="12625ef2-af4d-498a-be42-4bc310bbd487" Jan 30 15:33:33.344077 containerd[1908]: time="2025-01-30T15:33:33.344059321Z" level=error msg="Failed to destroy network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.344252 containerd[1908]: time="2025-01-30T15:33:33.344231495Z" level=error msg="encountered an error cleaning up failed sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.344295 containerd[1908]: time="2025-01-30T15:33:33.344260043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs9r5,Uid:5864f7d5-fb06-43dd-b6d9-86f374c2cf41,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.344360 kubelet[3399]: E0130 15:33:33.344343 3399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.344406 kubelet[3399]: E0130 15:33:33.344370 3399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bs9r5" Jan 30 15:33:33.344406 kubelet[3399]: E0130 15:33:33.344386 3399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bs9r5" Jan 30 15:33:33.344466 kubelet[3399]: E0130 15:33:33.344413 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bs9r5_calico-system(5864f7d5-fb06-43dd-b6d9-86f374c2cf41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bs9r5_calico-system(5864f7d5-fb06-43dd-b6d9-86f374c2cf41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bs9r5" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" Jan 30 15:33:33.345509 containerd[1908]: time="2025-01-30T15:33:33.345490876Z" level=error msg="Failed to destroy network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.345709 containerd[1908]: time="2025-01-30T15:33:33.345693629Z" level=error msg="encountered an error cleaning up failed sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.345752 containerd[1908]: time="2025-01-30T15:33:33.345714625Z" level=error msg="Failed to destroy network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.345752 containerd[1908]: time="2025-01-30T15:33:33.345734785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-gv4f8,Uid:a950d8fb-24c2-4d89-81c7-1f97e95d8e16,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.345887 kubelet[3399]: E0130 15:33:33.345870 3399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.345914 kubelet[3399]: E0130 15:33:33.345897 3399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" Jan 30 15:33:33.345914 kubelet[3399]: E0130 15:33:33.345908 3399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" Jan 30 15:33:33.345956 kubelet[3399]: E0130 15:33:33.345928 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-684d4dff56-gv4f8_calico-apiserver(a950d8fb-24c2-4d89-81c7-1f97e95d8e16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-684d4dff56-gv4f8_calico-apiserver(a950d8fb-24c2-4d89-81c7-1f97e95d8e16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" podUID="a950d8fb-24c2-4d89-81c7-1f97e95d8e16" Jan 30 15:33:33.345990 containerd[1908]: time="2025-01-30T15:33:33.345914440Z" level=error msg="encountered an error cleaning up failed sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.345990 containerd[1908]: time="2025-01-30T15:33:33.345935052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-bv6pr,Uid:3efa98ef-abd2-46af-8ebe-522bd24dc469,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:33.346038 kubelet[3399]: E0130 15:33:33.345992 3399 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 15:33:33.346038 kubelet[3399]: E0130 15:33:33.346011 3399 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" Jan 30 15:33:33.346038 kubelet[3399]: E0130 15:33:33.346022 3399 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" Jan 30 15:33:33.346092 kubelet[3399]: E0130 15:33:33.346038 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-684d4dff56-bv6pr_calico-apiserver(3efa98ef-abd2-46af-8ebe-522bd24dc469)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-684d4dff56-bv6pr_calico-apiserver(3efa98ef-abd2-46af-8ebe-522bd24dc469)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" podUID="3efa98ef-abd2-46af-8ebe-522bd24dc469" Jan 30 15:33:34.069814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4-shm.mount: Deactivated successfully. Jan 30 15:33:34.069895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49-shm.mount: Deactivated successfully. Jan 30 15:33:34.069947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb-shm.mount: Deactivated successfully. Jan 30 15:33:34.069995 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca-shm.mount: Deactivated successfully. Jan 30 15:33:34.070045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7-shm.mount: Deactivated successfully. Jan 30 15:33:34.070100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17-shm.mount: Deactivated successfully. 
Jan 30 15:33:34.206738 kubelet[3399]: I0130 15:33:34.206685 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:33:34.208195 containerd[1908]: time="2025-01-30T15:33:34.208075567Z" level=info msg="StopPodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\"" Jan 30 15:33:34.209028 containerd[1908]: time="2025-01-30T15:33:34.208536203Z" level=info msg="Ensure that sandbox f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49 in task-service has been cleanup successfully" Jan 30 15:33:34.209155 kubelet[3399]: I0130 15:33:34.208914 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:33:34.210120 containerd[1908]: time="2025-01-30T15:33:34.210051417Z" level=info msg="StopPodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\"" Jan 30 15:33:34.210564 containerd[1908]: time="2025-01-30T15:33:34.210489297Z" level=info msg="Ensure that sandbox 0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb in task-service has been cleanup successfully" Jan 30 15:33:34.211361 kubelet[3399]: I0130 15:33:34.211315 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:33:34.212767 containerd[1908]: time="2025-01-30T15:33:34.212698561Z" level=info msg="StopPodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\"" Jan 30 15:33:34.213250 containerd[1908]: time="2025-01-30T15:33:34.213183600Z" level=info msg="Ensure that sandbox 9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4 in task-service has been cleanup successfully" Jan 30 15:33:34.214183 kubelet[3399]: I0130 15:33:34.214114 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:33:34.215678 containerd[1908]: time="2025-01-30T15:33:34.215597245Z" level=info msg="StopPodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\"" Jan 30 15:33:34.216165 containerd[1908]: time="2025-01-30T15:33:34.216104114Z" level=info msg="Ensure that sandbox fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca in task-service has been cleanup successfully" Jan 30 15:33:34.216624 kubelet[3399]: I0130 15:33:34.216564 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:33:34.217695 containerd[1908]: time="2025-01-30T15:33:34.217628219Z" level=info msg="StopPodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\"" Jan 30 15:33:34.218105 containerd[1908]: time="2025-01-30T15:33:34.218025908Z" level=info msg="Ensure that sandbox b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7 in task-service has been cleanup successfully" Jan 30 15:33:34.219163 kubelet[3399]: I0130 15:33:34.219121 3399 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:33:34.219571 containerd[1908]: time="2025-01-30T15:33:34.219550986Z" level=info msg="StopPodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\"" Jan 30 15:33:34.219696 
containerd[1908]: time="2025-01-30T15:33:34.219684805Z" level=info msg="Ensure that sandbox f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17 in task-service has been cleanup successfully" Jan 30 15:33:34.221473 containerd[1908]: time="2025-01-30T15:33:34.221422603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 15:33:34.233061 containerd[1908]: time="2025-01-30T15:33:34.233019907Z" level=error msg="StopPodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" failed" error="failed to destroy network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:34.233157 kubelet[3399]: E0130 15:33:34.233138 3399 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:33:34.233197 kubelet[3399]: E0130 15:33:34.233171 3399 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49"} Jan 30 15:33:34.233222 kubelet[3399]: E0130 15:33:34.233210 3399 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3efa98ef-abd2-46af-8ebe-522bd24dc469\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:33:34.233270 kubelet[3399]: E0130 15:33:34.233229 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3efa98ef-abd2-46af-8ebe-522bd24dc469\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" podUID="3efa98ef-abd2-46af-8ebe-522bd24dc469" Jan 30 15:33:34.235166 containerd[1908]: time="2025-01-30T15:33:34.235121028Z" level=error msg="StopPodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" failed" error="failed to destroy network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:34.235273 containerd[1908]: time="2025-01-30T15:33:34.235209853Z" level=error msg="StopPodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" failed" error="failed to destroy network for sandbox 
\"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:34.235313 kubelet[3399]: E0130 15:33:34.235290 3399 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:33:34.235341 kubelet[3399]: E0130 15:33:34.235324 3399 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17"} Jan 30 15:33:34.235367 kubelet[3399]: E0130 15:33:34.235356 3399 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47aaec61-dc45-4322-ae57-2b2017382ed5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:33:34.235409 kubelet[3399]: E0130 15:33:34.235290 3399 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:33:34.235409 kubelet[3399]: E0130 15:33:34.235400 3399 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca"} Jan 30 15:33:34.235456 kubelet[3399]: E0130 15:33:34.235418 3399 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a950d8fb-24c2-4d89-81c7-1f97e95d8e16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:33:34.235456 kubelet[3399]: E0130 15:33:34.235428 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a950d8fb-24c2-4d89-81c7-1f97e95d8e16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" 
podUID="a950d8fb-24c2-4d89-81c7-1f97e95d8e16" Jan 30 15:33:34.235456 kubelet[3399]: E0130 15:33:34.235377 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47aaec61-dc45-4322-ae57-2b2017382ed5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9scnd" podUID="47aaec61-dc45-4322-ae57-2b2017382ed5" Jan 30 15:33:34.235731 containerd[1908]: time="2025-01-30T15:33:34.235716932Z" level=error msg="StopPodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" failed" error="failed to destroy network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:34.235799 kubelet[3399]: E0130 15:33:34.235788 3399 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:33:34.235830 kubelet[3399]: E0130 15:33:34.235803 3399 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb"} Jan 30 15:33:34.235830 kubelet[3399]: E0130 15:33:34.235818 3399 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12625ef2-af4d-498a-be42-4bc310bbd487\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:33:34.235895 kubelet[3399]: E0130 15:33:34.235836 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12625ef2-af4d-498a-be42-4bc310bbd487\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" podUID="12625ef2-af4d-498a-be42-4bc310bbd487" Jan 30 15:33:34.235930 containerd[1908]: time="2025-01-30T15:33:34.235914615Z" level=error msg="StopPodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" failed" error="failed to destroy network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:34.235989 kubelet[3399]: E0130 15:33:34.235977 3399 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:33:34.236011 kubelet[3399]: E0130 15:33:34.235993 3399 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4"} Jan 30 15:33:34.236011 kubelet[3399]: E0130 15:33:34.236008 3399 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:33:34.236056 kubelet[3399]: E0130 15:33:34.236019 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5864f7d5-fb06-43dd-b6d9-86f374c2cf41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bs9r5" podUID="5864f7d5-fb06-43dd-b6d9-86f374c2cf41" Jan 30 15:33:34.236381 containerd[1908]: time="2025-01-30T15:33:34.236361853Z" level=error msg="StopPodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" failed" error="failed to destroy network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 15:33:34.236434 kubelet[3399]: E0130 15:33:34.236422 3399 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:33:34.236458 kubelet[3399]: E0130 15:33:34.236437 3399 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7"} Jan 30 15:33:34.236458 kubelet[3399]: E0130 15:33:34.236452 3399 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d99c3914-c98d-41fc-8f33-a1ffbbccba09\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 15:33:34.236510 kubelet[3399]: E0130 15:33:34.236462 3399 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d99c3914-c98d-41fc-8f33-a1ffbbccba09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bzqcg" podUID="d99c3914-c98d-41fc-8f33-a1ffbbccba09" Jan 30 15:33:37.327541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839704805.mount: Deactivated successfully. Jan 30 15:33:37.362625 containerd[1908]: time="2025-01-30T15:33:37.362577304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:37.362853 containerd[1908]: time="2025-01-30T15:33:37.362800085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 15:33:37.363182 containerd[1908]: time="2025-01-30T15:33:37.363141736Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:37.363997 containerd[1908]: time="2025-01-30T15:33:37.363956550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:37.364392 containerd[1908]: time="2025-01-30T15:33:37.364349644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.142877854s" Jan 30 15:33:37.364392 containerd[1908]: time="2025-01-30T15:33:37.364366937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 15:33:37.368580 containerd[1908]: time="2025-01-30T15:33:37.368191446Z" level=info msg="CreateContainer within sandbox \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 15:33:37.373170 containerd[1908]: time="2025-01-30T15:33:37.373126223Z" level=info msg="CreateContainer within sandbox \"cbe43edb72bf6ec4f72ea27cdd83b7968ea586c3e9c11a97d44b4d22fb9b15ff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0a059ee854ef218898cc1487e6d3233845f3ec2d99301f0b4153ee1b858b23e\"" Jan 30 15:33:37.374080 containerd[1908]: time="2025-01-30T15:33:37.373624909Z" level=info msg="StartContainer for \"d0a059ee854ef218898cc1487e6d3233845f3ec2d99301f0b4153ee1b858b23e\"" Jan 30 15:33:37.416650 
containerd[1908]: time="2025-01-30T15:33:37.416620979Z" level=info msg="StartContainer for \"d0a059ee854ef218898cc1487e6d3233845f3ec2d99301f0b4153ee1b858b23e\" returns successfully" Jan 30 15:33:37.487611 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 15:33:37.487664 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 15:33:38.249375 kubelet[3399]: I0130 15:33:38.249319 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r26mq" podStartSLOduration=1.827706265 podStartE2EDuration="13.249309424s" podCreationTimestamp="2025-01-30 15:33:25 +0000 UTC" firstStartedPulling="2025-01-30 15:33:25.943141834 +0000 UTC m=+19.870314377" lastFinishedPulling="2025-01-30 15:33:37.364744993 +0000 UTC m=+31.291917536" observedRunningTime="2025-01-30 15:33:38.248900125 +0000 UTC m=+32.176072666" watchObservedRunningTime="2025-01-30 15:33:38.249309424 +0000 UTC m=+32.176481961" Jan 30 15:33:45.114749 containerd[1908]: time="2025-01-30T15:33:45.114668008Z" level=info msg="StopPodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\"" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.188 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.188 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" iface="eth0" netns="/var/run/netns/cni-b9004750-c085-872b-905e-9b072ca9e50d" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.188 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" iface="eth0" netns="/var/run/netns/cni-b9004750-c085-872b-905e-9b072ca9e50d" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.188 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" iface="eth0" netns="/var/run/netns/cni-b9004750-c085-872b-905e-9b072ca9e50d" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.188 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.188 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.201 [INFO][5276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.201 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.201 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.205 [WARNING][5276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.205 [INFO][5276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.206 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:33:45.208617 containerd[1908]: 2025-01-30 15:33:45.207 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:33:45.208963 containerd[1908]: time="2025-01-30T15:33:45.208691350Z" level=info msg="TearDown network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" successfully" Jan 30 15:33:45.208963 containerd[1908]: time="2025-01-30T15:33:45.208713152Z" level=info msg="StopPodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" returns successfully" Jan 30 15:33:45.209258 containerd[1908]: time="2025-01-30T15:33:45.209214953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-gv4f8,Uid:a950d8fb-24c2-4d89-81c7-1f97e95d8e16,Namespace:calico-apiserver,Attempt:1,}" Jan 30 15:33:45.210888 systemd[1]: run-netns-cni\x2db9004750\x2dc085\x2d872b\x2d905e\x2d9b072ca9e50d.mount: Deactivated successfully. 
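Two things worth noting in the stretch above. First, with calico-node now running, the retried StopPodSandbox for fbe55512... finally walks the full CNI DEL path: the veth is already gone, and the IPAM release is a no-op because the original failed ADD never claimed an address (hence the "Asked to release address but it doesn't exist" warning, which Calico deliberately ignores). Second, the "Observed pod startup duration" record at 15:33:38 reconciles to the nanosecond under the reading that podStartSLOduration is the end-to-end duration (measured at watchObservedRunningTime) minus the image-pull window. A worked check of that arithmetic, using only values copied from the log (this is not kubelet code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // layout is Go's reference time matching the kubelet timestamp format.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-01-30 15:33:25 +0000 UTC")
    	firstPull := mustParse("2025-01-30 15:33:25.943141834 +0000 UTC")
    	lastPull := mustParse("2025-01-30 15:33:37.364744993 +0000 UTC")
    	watchRunning := mustParse("2025-01-30 15:33:38.249309424 +0000 UTC")

    	e2e := watchRunning.Sub(created) // 13.249309424s = podStartE2EDuration
    	pull := lastPull.Sub(firstPull)  // 11.421603159s pulling calico/node
    	slo := e2e - pull                // 1.827706265s  = podStartSLOduration
    	fmt.Println(e2e, pull, slo)
    }

In other words, of the 13.2s between pod creation and Running, 11.4s went to pulling the ~143MB calico/node image and only 1.8s counts against the startup SLO.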
Jan 30 15:33:45.265933 systemd-networkd[1556]: caliac3c818f711: Link UP Jan 30 15:33:45.266033 systemd-networkd[1556]: caliac3c818f711: Gained carrier Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.225 [INFO][5289] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.230 [INFO][5289] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0 calico-apiserver-684d4dff56- calico-apiserver a950d8fb-24c2-4d89-81c7-1f97e95d8e16 728 0 2025-01-30 15:33:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684d4dff56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-8297fae690 calico-apiserver-684d4dff56-gv4f8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac3c818f711 [] []}} ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.230 [INFO][5289] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.245 [INFO][5310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" HandleID="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.249 [INFO][5310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" HandleID="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360d20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-8297fae690", "pod":"calico-apiserver-684d4dff56-gv4f8", "timestamp":"2025-01-30 15:33:45.245235553 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8297fae690", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.249 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.249 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.249 [INFO][5310] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8297fae690' Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.250 [INFO][5310] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.252 [INFO][5310] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.254 [INFO][5310] ipam/ipam.go 489: Trying affinity for 192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.255 [INFO][5310] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.256 [INFO][5310] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.256 [INFO][5310] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.256 [INFO][5310] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.258 [INFO][5310] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.261 [INFO][5310] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.65/26] block=192.168.112.64/26 handle="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.261 [INFO][5310] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.65/26] handle="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.261 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
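The assignment sequence above is Calico IPAM's host block affinity at work: this node (ci-4081.3.0-a-8297fae690) holds an affine /26 block, 192.168.112.64/26, and the first workload address claimed from it is 192.168.112.65 for the calico-apiserver pod. A small sketch of what those block bounds mean, using only the CIDR from the log (plain address arithmetic, not Calico IPAM code):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affine block logged above: a /26 holds 64 addresses.
    	block := netip.MustParsePrefix("192.168.112.64/26")

    	count := 0
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		count++
    	}
    	fmt.Println("addresses in block:", count) // 64

    	// The block's base address is 192.168.112.64; the first pod IP
    	// the log shows being claimed is the next one, .65.
    	fmt.Println("first claimed above:", block.Addr().Next())
    }

Per-host /26 blocks let Calico aggregate routes per node and keep most allocations local to the host's own block; the host-wide IPAM lock acquired and released above serializes those block updates.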
Jan 30 15:33:45.276445 containerd[1908]: 2025-01-30 15:33:45.261 [INFO][5310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.65/26] IPv6=[] ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" HandleID="k8s-pod-network.ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.276839 containerd[1908]: 2025-01-30 15:33:45.262 [INFO][5289] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"a950d8fb-24c2-4d89-81c7-1f97e95d8e16", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"", Pod:"calico-apiserver-684d4dff56-gv4f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac3c818f711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:45.276839 containerd[1908]: 2025-01-30 15:33:45.262 [INFO][5289] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.65/32] ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.276839 containerd[1908]: 2025-01-30 15:33:45.262 [INFO][5289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac3c818f711 ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.276839 containerd[1908]: 2025-01-30 15:33:45.266 [INFO][5289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.276839 containerd[1908]: 2025-01-30 15:33:45.266 [INFO][5289] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"a950d8fb-24c2-4d89-81c7-1f97e95d8e16", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b", Pod:"calico-apiserver-684d4dff56-gv4f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac3c818f711", MAC:"76:e8:48:9d:25:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:45.276839 containerd[1908]: 2025-01-30 15:33:45.274 [INFO][5289] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-gv4f8" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:33:45.285643 containerd[1908]: time="2025-01-30T15:33:45.285604313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:45.285643 containerd[1908]: time="2025-01-30T15:33:45.285629782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:45.285643 containerd[1908]: time="2025-01-30T15:33:45.285636529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:45.285759 containerd[1908]: time="2025-01-30T15:33:45.285679106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:45.353183 containerd[1908]: time="2025-01-30T15:33:45.353131987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-gv4f8,Uid:a950d8fb-24c2-4d89-81c7-1f97e95d8e16,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b\"" Jan 30 15:33:45.353911 containerd[1908]: time="2025-01-30T15:33:45.353898795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 15:33:46.115919 containerd[1908]: time="2025-01-30T15:33:46.115823070Z" level=info msg="StopPodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\"" Jan 30 15:33:46.117133 containerd[1908]: time="2025-01-30T15:33:46.115860212Z" level=info msg="StopPodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\"" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.150 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" iface="eth0" netns="/var/run/netns/cni-8703d8e2-47fd-60a3-64f6-b4a867c766d0" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" iface="eth0" netns="/var/run/netns/cni-8703d8e2-47fd-60a3-64f6-b4a867c766d0" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" iface="eth0" netns="/var/run/netns/cni-8703d8e2-47fd-60a3-64f6-b4a867c766d0" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.161 [INFO][5473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.161 [INFO][5473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.161 [INFO][5473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.165 [WARNING][5473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.165 [INFO][5473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.166 [INFO][5473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:33:46.167936 containerd[1908]: 2025-01-30 15:33:46.167 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:33:46.168335 containerd[1908]: time="2025-01-30T15:33:46.167987444Z" level=info msg="TearDown network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" successfully" Jan 30 15:33:46.168335 containerd[1908]: time="2025-01-30T15:33:46.168002405Z" level=info msg="StopPodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" returns successfully" Jan 30 15:33:46.168427 containerd[1908]: time="2025-01-30T15:33:46.168414638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9scnd,Uid:47aaec61-dc45-4322-ae57-2b2017382ed5,Namespace:kube-system,Attempt:1,}" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" iface="eth0" netns="/var/run/netns/cni-59ea9da5-913f-dea8-3ff5-37f59ea1bd70" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" iface="eth0" netns="/var/run/netns/cni-59ea9da5-913f-dea8-3ff5-37f59ea1bd70" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" iface="eth0" netns="/var/run/netns/cni-59ea9da5-913f-dea8-3ff5-37f59ea1bd70" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.151 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.161 [INFO][5474] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.161 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.166 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.170 [WARNING][5474] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.170 [INFO][5474] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.171 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:33:46.172411 containerd[1908]: 2025-01-30 15:33:46.171 [INFO][5448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:33:46.172725 containerd[1908]: time="2025-01-30T15:33:46.172469222Z" level=info msg="TearDown network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" successfully" Jan 30 15:33:46.172725 containerd[1908]: time="2025-01-30T15:33:46.172483211Z" level=info msg="StopPodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" returns successfully" Jan 30 15:33:46.172862 containerd[1908]: time="2025-01-30T15:33:46.172812958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-bv6pr,Uid:3efa98ef-abd2-46af-8ebe-522bd24dc469,Namespace:calico-apiserver,Attempt:1,}" Jan 30 15:33:46.211902 systemd[1]: run-netns-cni\x2d59ea9da5\x2d913f\x2ddea8\x2d3ff5\x2d37f59ea1bd70.mount: Deactivated successfully. Jan 30 15:33:46.211981 systemd[1]: run-netns-cni\x2d8703d8e2\x2d47fd\x2d60a3\x2d64f6\x2db4a867c766d0.mount: Deactivated successfully. 
Jan 30 15:33:46.224585 systemd-networkd[1556]: cali40c37a672eb: Link UP Jan 30 15:33:46.224716 systemd-networkd[1556]: cali40c37a672eb: Gained carrier Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.182 [INFO][5504] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.188 [INFO][5504] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0 coredns-7db6d8ff4d- kube-system 47aaec61-dc45-4322-ae57-2b2017382ed5 737 0 2025-01-30 15:33:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-8297fae690 coredns-7db6d8ff4d-9scnd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40c37a672eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.188 [INFO][5504] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.202 [INFO][5525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" HandleID="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.208 [INFO][5525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" HandleID="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003619d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-8297fae690", "pod":"coredns-7db6d8ff4d-9scnd", "timestamp":"2025-01-30 15:33:46.202840529 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8297fae690", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.208 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.208 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.208 [INFO][5525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8297fae690' Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.209 [INFO][5525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.211 [INFO][5525] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.213 [INFO][5525] ipam/ipam.go 489: Trying affinity for 192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.214 [INFO][5525] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.216 [INFO][5525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.216 [INFO][5525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.216 [INFO][5525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5 Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.219 [INFO][5525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.222 [INFO][5525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.66/26] block=192.168.112.64/26 handle="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.222 [INFO][5525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.66/26] handle="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.222 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
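The IPAM walk above shows Calico's block-affinity model: this node holds an affinity for the /26 block 192.168.112.64/26 and hands out sequential addresses from it (.65 for the apiserver pod earlier, .66 here, .67 and .68 further down). A quick sketch of the block arithmetic with Go's net/netip package, just to make those numbers concrete:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.112.64/26")
	// A /26 spans 64 addresses: 192.168.112.64 through 192.168.112.127.
	first := block.Addr()
	last := first
	for i := 0; i < 63; i++ {
		last = last.Next()
	}
	fmt.Println(block, "covers", first, "-", last)
	// Every pod IP assigned in this log falls inside the node's block.
	for _, ip := range []string{"192.168.112.65", "192.168.112.66",
		"192.168.112.67", "192.168.112.68"} {
		fmt.Println(ip, "in block:", block.Contains(netip.MustParseAddr(ip)))
	}
}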
Jan 30 15:33:46.231010 containerd[1908]: 2025-01-30 15:33:46.222 [INFO][5525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.66/26] IPv6=[] ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" HandleID="k8s-pod-network.312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.231438 containerd[1908]: 2025-01-30 15:33:46.223 [INFO][5504] cni-plugin/k8s.go 386: Populated endpoint ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47aaec61-dc45-4322-ae57-2b2017382ed5", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"", Pod:"coredns-7db6d8ff4d-9scnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40c37a672eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:46.231438 containerd[1908]: 2025-01-30 15:33:46.223 [INFO][5504] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.66/32] ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.231438 containerd[1908]: 2025-01-30 15:33:46.223 [INFO][5504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40c37a672eb ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.231438 containerd[1908]: 2025-01-30 15:33:46.224 [INFO][5504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" 
WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.231438 containerd[1908]: 2025-01-30 15:33:46.224 [INFO][5504] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47aaec61-dc45-4322-ae57-2b2017382ed5", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5", Pod:"coredns-7db6d8ff4d-9scnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40c37a672eb", MAC:"92:cf:e5:3d:bd:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:46.231438 containerd[1908]: 2025-01-30 15:33:46.230 [INFO][5504] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9scnd" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:33:46.240518 containerd[1908]: time="2025-01-30T15:33:46.240473155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:46.240518 containerd[1908]: time="2025-01-30T15:33:46.240501302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:46.240518 containerd[1908]: time="2025-01-30T15:33:46.240508264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:46.240643 containerd[1908]: time="2025-01-30T15:33:46.240559143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:46.245328 systemd-networkd[1556]: cali1caaac78a43: Link UP Jan 30 15:33:46.245447 systemd-networkd[1556]: cali1caaac78a43: Gained carrier Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.204 [INFO][5530] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.210 [INFO][5530] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0 calico-apiserver-684d4dff56- calico-apiserver 3efa98ef-abd2-46af-8ebe-522bd24dc469 738 0 2025-01-30 15:33:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684d4dff56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-8297fae690 calico-apiserver-684d4dff56-bv6pr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1caaac78a43 [] []}} ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.210 [INFO][5530] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.224 [INFO][5568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" HandleID="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.230 [INFO][5568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" HandleID="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-8297fae690", "pod":"calico-apiserver-684d4dff56-bv6pr", "timestamp":"2025-01-30 15:33:46.224370867 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8297fae690", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.230 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.230 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.230 [INFO][5568] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8297fae690' Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.231 [INFO][5568] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.234 [INFO][5568] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.236 [INFO][5568] ipam/ipam.go 489: Trying affinity for 192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.237 [INFO][5568] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.238 [INFO][5568] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.238 [INFO][5568] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.239 [INFO][5568] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926 Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.241 [INFO][5568] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.243 [INFO][5568] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.67/26] block=192.168.112.64/26 handle="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.243 [INFO][5568] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.67/26] handle="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.243 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
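Note the strict interleaving of "About to acquire / Acquired / Released host-wide IPAM lock" across handlers [5525] and [5568]: concurrent CNI ADDs on the same node serialize their block updates so two pods can never claim the same address. A minimal model of that pattern, assuming a plain in-process mutex as a stand-in (the log does not show how Calico actually implements the lock):

package main

import (
	"fmt"
	"sync"
)

// hostIPAM models one node's view of its /26 block: a cursor over
// free addresses guarded by a host-wide lock.
type hostIPAM struct {
	mu   sync.Mutex
	next int // host part of the next free address in the block
}

func (h *hostIPAM) assign() int {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	h.next++
	return h.next
}

func main() {
	ipam := &hostIPAM{next: 64} // block starts at 192.168.112.64
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // four pods race, as in this log
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Printf("assigned 192.168.112.%d/26\n", ipam.assign())
		}()
	}
	wg.Wait() // each pod gets a distinct address: .65 .66 .67 .68
}

The order in which racing pods win the lock is nondeterministic, but the addresses handed out are always distinct, which is the property the log's lock lines are guarding.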
Jan 30 15:33:46.251177 containerd[1908]: 2025-01-30 15:33:46.243 [INFO][5568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.67/26] IPv6=[] ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" HandleID="k8s-pod-network.9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.251578 containerd[1908]: 2025-01-30 15:33:46.244 [INFO][5530] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efa98ef-abd2-46af-8ebe-522bd24dc469", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"", Pod:"calico-apiserver-684d4dff56-bv6pr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caaac78a43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:46.251578 containerd[1908]: 2025-01-30 15:33:46.244 [INFO][5530] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.67/32] ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.251578 containerd[1908]: 2025-01-30 15:33:46.244 [INFO][5530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1caaac78a43 ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.251578 containerd[1908]: 2025-01-30 15:33:46.245 [INFO][5530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.251578 containerd[1908]: 2025-01-30 15:33:46.245 [INFO][5530] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efa98ef-abd2-46af-8ebe-522bd24dc469", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926", Pod:"calico-apiserver-684d4dff56-bv6pr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caaac78a43", MAC:"0a:9c:17:cc:7e:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:46.251578 containerd[1908]: 2025-01-30 15:33:46.250 [INFO][5530] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926" Namespace="calico-apiserver" Pod="calico-apiserver-684d4dff56-bv6pr" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:33:46.260801 containerd[1908]: time="2025-01-30T15:33:46.260763673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:46.260801 containerd[1908]: time="2025-01-30T15:33:46.260794226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:46.260888 containerd[1908]: time="2025-01-30T15:33:46.260801448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:46.260888 containerd[1908]: time="2025-01-30T15:33:46.260847659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:46.277101 containerd[1908]: time="2025-01-30T15:33:46.277081124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9scnd,Uid:47aaec61-dc45-4322-ae57-2b2017382ed5,Namespace:kube-system,Attempt:1,} returns sandbox id \"312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5\"" Jan 30 15:33:46.278323 containerd[1908]: time="2025-01-30T15:33:46.278282953Z" level=info msg="CreateContainer within sandbox \"312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:33:46.282499 containerd[1908]: time="2025-01-30T15:33:46.282482450Z" level=info msg="CreateContainer within sandbox \"312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9c6075f9b36017892a2cc22b494655c8b08a87d223ffc57797e2110e545d63b\"" Jan 30 15:33:46.282711 containerd[1908]: time="2025-01-30T15:33:46.282668420Z" level=info msg="StartContainer for \"b9c6075f9b36017892a2cc22b494655c8b08a87d223ffc57797e2110e545d63b\"" Jan 30 15:33:46.287511 containerd[1908]: time="2025-01-30T15:33:46.287489937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684d4dff56-bv6pr,Uid:3efa98ef-abd2-46af-8ebe-522bd24dc469,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926\"" Jan 30 15:33:46.312606 containerd[1908]: time="2025-01-30T15:33:46.312585609Z" level=info msg="StartContainer for \"b9c6075f9b36017892a2cc22b494655c8b08a87d223ffc57797e2110e545d63b\" returns successfully" Jan 30 15:33:46.631825 systemd-networkd[1556]: caliac3c818f711: Gained IPv6LL Jan 30 15:33:46.966038 containerd[1908]: time="2025-01-30T15:33:46.966016140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:46.966215 containerd[1908]: time="2025-01-30T15:33:46.966196214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 15:33:46.966578 containerd[1908]: time="2025-01-30T15:33:46.966567526Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:46.967647 containerd[1908]: time="2025-01-30T15:33:46.967635119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:46.968420 containerd[1908]: time="2025-01-30T15:33:46.968404308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.614489546s" Jan 30 15:33:46.968459 containerd[1908]: time="2025-01-30T15:33:46.968422372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 15:33:46.968876 containerd[1908]: time="2025-01-30T15:33:46.968863979Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 15:33:46.969383 containerd[1908]: time="2025-01-30T15:33:46.969370577Z" level=info msg="CreateContainer within sandbox \"ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 15:33:46.972710 containerd[1908]: time="2025-01-30T15:33:46.972695070Z" level=info msg="CreateContainer within sandbox \"ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bee5cab65c6074e2ed752f72a6c2380a987655cf513fb9dc00292830c4fadeac\"" Jan 30 15:33:46.972925 containerd[1908]: time="2025-01-30T15:33:46.972910653Z" level=info msg="StartContainer for \"bee5cab65c6074e2ed752f72a6c2380a987655cf513fb9dc00292830c4fadeac\"" Jan 30 15:33:47.025236 containerd[1908]: time="2025-01-30T15:33:47.025212916Z" level=info msg="StartContainer for \"bee5cab65c6074e2ed752f72a6c2380a987655cf513fb9dc00292830c4fadeac\" returns successfully" Jan 30 15:33:47.113782 containerd[1908]: time="2025-01-30T15:33:47.113741170Z" level=info msg="StopPodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\"" Jan 30 15:33:47.113842 containerd[1908]: time="2025-01-30T15:33:47.113747468Z" level=info msg="StopPodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\"" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.135 [INFO][5861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.135 [INFO][5861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" iface="eth0" netns="/var/run/netns/cni-233d9fa2-ce6c-7fde-9021-5fdee84fcd48" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.135 [INFO][5861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" iface="eth0" netns="/var/run/netns/cni-233d9fa2-ce6c-7fde-9021-5fdee84fcd48" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.135 [INFO][5861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" iface="eth0" netns="/var/run/netns/cni-233d9fa2-ce6c-7fde-9021-5fdee84fcd48" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.135 [INFO][5861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.135 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.144 [INFO][5891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.144 [INFO][5891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.144 [INFO][5891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.148 [WARNING][5891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.148 [INFO][5891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.149 [INFO][5891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:33:47.150492 containerd[1908]: 2025-01-30 15:33:47.149 [INFO][5861] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:33:47.150954 containerd[1908]: time="2025-01-30T15:33:47.150576305Z" level=info msg="TearDown network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" successfully" Jan 30 15:33:47.150954 containerd[1908]: time="2025-01-30T15:33:47.150595992Z" level=info msg="StopPodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" returns successfully" Jan 30 15:33:47.151002 containerd[1908]: time="2025-01-30T15:33:47.150984931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745d85c999-h9vmg,Uid:12625ef2-af4d-498a-be42-4bc310bbd487,Namespace:calico-system,Attempt:1,}" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.134 [INFO][5862] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.134 [INFO][5862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" iface="eth0" netns="/var/run/netns/cni-0a739a16-6840-c5f2-62c4-e1a4af816f6a" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.134 [INFO][5862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" iface="eth0" netns="/var/run/netns/cni-0a739a16-6840-c5f2-62c4-e1a4af816f6a" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.134 [INFO][5862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" iface="eth0" netns="/var/run/netns/cni-0a739a16-6840-c5f2-62c4-e1a4af816f6a" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.134 [INFO][5862] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.134 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.145 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.145 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.149 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.152 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.152 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.153 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:33:47.154435 containerd[1908]: 2025-01-30 15:33:47.153 [INFO][5862] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:33:47.154712 containerd[1908]: time="2025-01-30T15:33:47.154498018Z" level=info msg="TearDown network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" successfully" Jan 30 15:33:47.154712 containerd[1908]: time="2025-01-30T15:33:47.154510582Z" level=info msg="StopPodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" returns successfully" Jan 30 15:33:47.154772 containerd[1908]: time="2025-01-30T15:33:47.154760805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bzqcg,Uid:d99c3914-c98d-41fc-8f33-a1ffbbccba09,Namespace:kube-system,Attempt:1,}" Jan 30 15:33:47.205476 systemd-networkd[1556]: cali585cfbc1a5f: Link UP Jan 30 15:33:47.205580 systemd-networkd[1556]: cali585cfbc1a5f: Gained carrier Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.167 [INFO][5919] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.172 [INFO][5919] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0 calico-kube-controllers-745d85c999- calico-system 12625ef2-af4d-498a-be42-4bc310bbd487 755 0 2025-01-30 15:33:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:745d85c999 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-8297fae690 calico-kube-controllers-745d85c999-h9vmg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali585cfbc1a5f [] []}} ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.172 [INFO][5919] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.186 [INFO][5965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" HandleID="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.190 [INFO][5965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" HandleID="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051470), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-8297fae690", "pod":"calico-kube-controllers-745d85c999-h9vmg", "timestamp":"2025-01-30 
15:33:47.186460731 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8297fae690", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.190 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.190 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.190 [INFO][5965] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8297fae690' Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.191 [INFO][5965] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.193 [INFO][5965] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.195 [INFO][5965] ipam/ipam.go 489: Trying affinity for 192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.196 [INFO][5965] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.197 [INFO][5965] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.197 [INFO][5965] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.198 [INFO][5965] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.200 [INFO][5965] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.203 [INFO][5965] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.68/26] block=192.168.112.64/26 handle="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.203 [INFO][5965] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.68/26] handle="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.203 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 15:33:47.210862 containerd[1908]: 2025-01-30 15:33:47.203 [INFO][5965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.68/26] IPv6=[] ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" HandleID="k8s-pod-network.40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.211284 containerd[1908]: 2025-01-30 15:33:47.204 [INFO][5919] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0", GenerateName:"calico-kube-controllers-745d85c999-", Namespace:"calico-system", SelfLink:"", UID:"12625ef2-af4d-498a-be42-4bc310bbd487", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745d85c999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"", Pod:"calico-kube-controllers-745d85c999-h9vmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali585cfbc1a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:47.211284 containerd[1908]: 2025-01-30 15:33:47.204 [INFO][5919] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.68/32] ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.211284 containerd[1908]: 2025-01-30 15:33:47.204 [INFO][5919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali585cfbc1a5f ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.211284 containerd[1908]: 2025-01-30 15:33:47.205 [INFO][5919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.211284 
containerd[1908]: 2025-01-30 15:33:47.205 [INFO][5919] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0", GenerateName:"calico-kube-controllers-745d85c999-", Namespace:"calico-system", SelfLink:"", UID:"12625ef2-af4d-498a-be42-4bc310bbd487", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745d85c999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e", Pod:"calico-kube-controllers-745d85c999-h9vmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali585cfbc1a5f", MAC:"0e:02:cb:16:da:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:47.211284 containerd[1908]: 2025-01-30 15:33:47.210 [INFO][5919] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e" Namespace="calico-system" Pod="calico-kube-controllers-745d85c999-h9vmg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:33:47.213499 systemd[1]: run-netns-cni\x2d233d9fa2\x2dce6c\x2d7fde\x2d9021\x2d5fdee84fcd48.mount: Deactivated successfully. Jan 30 15:33:47.213593 systemd[1]: run-netns-cni\x2d0a739a16\x2d6840\x2dc5f2\x2d62c4\x2de1a4af816f6a.mount: Deactivated successfully. Jan 30 15:33:47.219911 systemd-networkd[1556]: cali9b7a10c4ed0: Link UP Jan 30 15:33:47.220017 systemd-networkd[1556]: cali9b7a10c4ed0: Gained carrier Jan 30 15:33:47.220870 containerd[1908]: time="2025-01-30T15:33:47.220827496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:47.220870 containerd[1908]: time="2025-01-30T15:33:47.220862576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:47.220870 containerd[1908]: time="2025-01-30T15:33:47.220869729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:47.220962 containerd[1908]: time="2025-01-30T15:33:47.220913811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.169 [INFO][5929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.173 [INFO][5929] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0 coredns-7db6d8ff4d- kube-system d99c3914-c98d-41fc-8f33-a1ffbbccba09 754 0 2025-01-30 15:33:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-8297fae690 coredns-7db6d8ff4d-bzqcg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b7a10c4ed0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.173 [INFO][5929] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.186 [INFO][5970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" HandleID="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.192 [INFO][5970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" HandleID="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-8297fae690", "pod":"coredns-7db6d8ff4d-bzqcg", "timestamp":"2025-01-30 15:33:47.186460708 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8297fae690", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.192 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.203 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.203 [INFO][5970] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8297fae690' Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.204 [INFO][5970] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.206 [INFO][5970] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.209 [INFO][5970] ipam/ipam.go 489: Trying affinity for 192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.210 [INFO][5970] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.212 [INFO][5970] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.212 [INFO][5970] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.212 [INFO][5970] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.214 [INFO][5970] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.218 [INFO][5970] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.69/26] block=192.168.112.64/26 handle="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.218 [INFO][5970] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.69/26] handle="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.218 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
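
The AutoAssignArgs dump in the coredns trace above spells out the request each CNI ADD hands the IPAM plugin: one IPv4, no IPv6, a per-container handle, and pod/node attributes. The following is a pared-down Go mirror of that shape for orientation only; the field set is trimmed (pool, reservation, and MaxBlocksPerHost fields omitted), and the real type is ipam.AutoAssignArgs in libcalico-go.

package main

import "fmt"

// Trimmed mirror of the ipam.AutoAssignArgs value logged above.
type AutoAssignArgs struct {
	Num4, Num6  int
	HandleID    *string           // ties the allocation to the sandbox for later release
	Attrs       map[string]string // namespace/node/pod/timestamp, as logged
	Hostname    string
	IntendedUse string
}

func main() {
	h := "k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa"
	args := AutoAssignArgs{
		Num4:     1, // matches "Auto-assign 1 ipv4, 0 ipv6 addrs"
		HandleID: &h,
		Attrs: map[string]string{
			"namespace": "kube-system",
			"node":      "ci-4081.3.0-a-8297fae690",
			"pod":       "coredns-7db6d8ff4d-bzqcg",
		},
		Hostname:    "ci-4081.3.0-a-8297fae690",
		IntendedUse: "Workload",
	}
	fmt.Printf("%+v\n", args)
}
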
Jan 30 15:33:47.226635 containerd[1908]: 2025-01-30 15:33:47.218 [INFO][5970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.69/26] IPv6=[] ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" HandleID="k8s-pod-network.91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.227032 containerd[1908]: 2025-01-30 15:33:47.219 [INFO][5929] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d99c3914-c98d-41fc-8f33-a1ffbbccba09", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"", Pod:"coredns-7db6d8ff4d-bzqcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7a10c4ed0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:47.227032 containerd[1908]: 2025-01-30 15:33:47.219 [INFO][5929] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.69/32] ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.227032 containerd[1908]: 2025-01-30 15:33:47.219 [INFO][5929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b7a10c4ed0 ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.227032 containerd[1908]: 2025-01-30 15:33:47.220 [INFO][5929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" 
WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.227032 containerd[1908]: 2025-01-30 15:33:47.220 [INFO][5929] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d99c3914-c98d-41fc-8f33-a1ffbbccba09", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa", Pod:"coredns-7db6d8ff4d-bzqcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7a10c4ed0", MAC:"fa:55:7f:32:31:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:47.227032 containerd[1908]: 2025-01-30 15:33:47.225 [INFO][5929] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bzqcg" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:33:47.235519 containerd[1908]: time="2025-01-30T15:33:47.235477358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:47.235519 containerd[1908]: time="2025-01-30T15:33:47.235505116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:47.235519 containerd[1908]: time="2025-01-30T15:33:47.235515641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:47.235620 containerd[1908]: time="2025-01-30T15:33:47.235564536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:47.259682 containerd[1908]: time="2025-01-30T15:33:47.259649401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745d85c999-h9vmg,Uid:12625ef2-af4d-498a-be42-4bc310bbd487,Namespace:calico-system,Attempt:1,} returns sandbox id \"40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e\"" Jan 30 15:33:47.262073 containerd[1908]: time="2025-01-30T15:33:47.262049325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bzqcg,Uid:d99c3914-c98d-41fc-8f33-a1ffbbccba09,Namespace:kube-system,Attempt:1,} returns sandbox id \"91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa\"" Jan 30 15:33:47.263430 containerd[1908]: time="2025-01-30T15:33:47.263415456Z" level=info msg="CreateContainer within sandbox \"91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:33:47.266491 kubelet[3399]: I0130 15:33:47.266459 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-684d4dff56-gv4f8" podStartSLOduration=20.651411697 podStartE2EDuration="22.266446489s" podCreationTimestamp="2025-01-30 15:33:25 +0000 UTC" firstStartedPulling="2025-01-30 15:33:45.353768692 +0000 UTC m=+39.280941233" lastFinishedPulling="2025-01-30 15:33:46.968803485 +0000 UTC m=+40.895976025" observedRunningTime="2025-01-30 15:33:47.266274681 +0000 UTC m=+41.193447223" watchObservedRunningTime="2025-01-30 15:33:47.266446489 +0000 UTC m=+41.193619026" Jan 30 15:33:47.266854 kubelet[3399]: I0130 15:33:47.266667 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9scnd" podStartSLOduration=28.266661595 podStartE2EDuration="28.266661595s" podCreationTimestamp="2025-01-30 15:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:33:47.26153206 +0000 UTC m=+41.188704600" watchObservedRunningTime="2025-01-30 15:33:47.266661595 +0000 UTC m=+41.193834132" Jan 30 15:33:47.267795 containerd[1908]: time="2025-01-30T15:33:47.267747678Z" level=info msg="CreateContainer within sandbox \"91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e016b7875a8728a460211687ce679eaad042d9a2c71e0a1cc858fee8dfa725c\"" Jan 30 15:33:47.268039 containerd[1908]: time="2025-01-30T15:33:47.268025472Z" level=info msg="StartContainer for \"4e016b7875a8728a460211687ce679eaad042d9a2c71e0a1cc858fee8dfa725c\"" Jan 30 15:33:47.297417 containerd[1908]: time="2025-01-30T15:33:47.297395263Z" level=info msg="StartContainer for \"4e016b7875a8728a460211687ce679eaad042d9a2c71e0a1cc858fee8dfa725c\" returns successfully" Jan 30 15:33:47.335721 systemd-networkd[1556]: cali40c37a672eb: Gained IPv6LL Jan 30 15:33:47.347142 containerd[1908]: time="2025-01-30T15:33:47.347122079Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:47.347380 containerd[1908]: time="2025-01-30T15:33:47.347358989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 15:33:47.348635 containerd[1908]: time="2025-01-30T15:33:47.348574418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image 
id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 379.695643ms" Jan 30 15:33:47.348635 containerd[1908]: time="2025-01-30T15:33:47.348591416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 15:33:47.349033 containerd[1908]: time="2025-01-30T15:33:47.349021857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 15:33:47.349811 containerd[1908]: time="2025-01-30T15:33:47.349706573Z" level=info msg="CreateContainer within sandbox \"9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 15:33:47.353686 containerd[1908]: time="2025-01-30T15:33:47.353642029Z" level=info msg="CreateContainer within sandbox \"9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8cc4862874cca414d5a9a9e1f276264bcfb0be221e25f73458b037cfb19ba274\"" Jan 30 15:33:47.353972 containerd[1908]: time="2025-01-30T15:33:47.353920259Z" level=info msg="StartContainer for \"8cc4862874cca414d5a9a9e1f276264bcfb0be221e25f73458b037cfb19ba274\"" Jan 30 15:33:47.397253 containerd[1908]: time="2025-01-30T15:33:47.397232938Z" level=info msg="StartContainer for \"8cc4862874cca414d5a9a9e1f276264bcfb0be221e25f73458b037cfb19ba274\" returns successfully" Jan 30 15:33:47.399625 systemd-networkd[1556]: cali1caaac78a43: Gained IPv6LL Jan 30 15:33:48.262338 kubelet[3399]: I0130 15:33:48.262314 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 15:33:48.271579 kubelet[3399]: I0130 15:33:48.271486 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bzqcg" podStartSLOduration=29.271457078 podStartE2EDuration="29.271457078s" podCreationTimestamp="2025-01-30 15:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:33:48.270723249 +0000 UTC m=+42.197895820" watchObservedRunningTime="2025-01-30 15:33:48.271457078 +0000 UTC m=+42.198629637" Jan 30 15:33:48.299636 kubelet[3399]: I0130 15:33:48.299511 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-684d4dff56-bv6pr" podStartSLOduration=22.238614682 podStartE2EDuration="23.299482623s" podCreationTimestamp="2025-01-30 15:33:25 +0000 UTC" firstStartedPulling="2025-01-30 15:33:46.28810394 +0000 UTC m=+40.215276483" lastFinishedPulling="2025-01-30 15:33:47.348971884 +0000 UTC m=+41.276144424" observedRunningTime="2025-01-30 15:33:48.298730459 +0000 UTC m=+42.225903081" watchObservedRunningTime="2025-01-30 15:33:48.299482623 +0000 UTC m=+42.226655190" Jan 30 15:33:48.984007 containerd[1908]: time="2025-01-30T15:33:48.983953946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:48.984234 containerd[1908]: time="2025-01-30T15:33:48.984190406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 
15:33:48.984573 containerd[1908]: time="2025-01-30T15:33:48.984533625Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:48.985439 containerd[1908]: time="2025-01-30T15:33:48.985398600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:48.985827 containerd[1908]: time="2025-01-30T15:33:48.985785514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.636749516s" Jan 30 15:33:48.985827 containerd[1908]: time="2025-01-30T15:33:48.985803598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 15:33:48.989801 containerd[1908]: time="2025-01-30T15:33:48.989781985Z" level=info msg="CreateContainer within sandbox \"40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 15:33:48.993465 containerd[1908]: time="2025-01-30T15:33:48.993449920Z" level=info msg="CreateContainer within sandbox \"40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bae8b54260a1d33baba1f61d4c1e9a53223b38fd5c11ae6f376ac6ffc1cb2a3c\"" Jan 30 15:33:48.993698 containerd[1908]: time="2025-01-30T15:33:48.993684445Z" level=info msg="StartContainer for \"bae8b54260a1d33baba1f61d4c1e9a53223b38fd5c11ae6f376ac6ffc1cb2a3c\"" Jan 30 15:33:49.038157 containerd[1908]: time="2025-01-30T15:33:49.038135089Z" level=info msg="StartContainer for \"bae8b54260a1d33baba1f61d4c1e9a53223b38fd5c11ae6f376ac6ffc1cb2a3c\" returns successfully" Jan 30 15:33:49.063604 systemd-networkd[1556]: cali585cfbc1a5f: Gained IPv6LL Jan 30 15:33:49.063795 systemd-networkd[1556]: cali9b7a10c4ed0: Gained IPv6LL Jan 30 15:33:49.114983 containerd[1908]: time="2025-01-30T15:33:49.114866798Z" level=info msg="StopPodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\"" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.182 [INFO][6360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.182 [INFO][6360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" iface="eth0" netns="/var/run/netns/cni-734a0870-9e90-4a06-1f57-f7ee67c3caa3" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.182 [INFO][6360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" iface="eth0" netns="/var/run/netns/cni-734a0870-9e90-4a06-1f57-f7ee67c3caa3" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.182 [INFO][6360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" iface="eth0" netns="/var/run/netns/cni-734a0870-9e90-4a06-1f57-f7ee67c3caa3" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.182 [INFO][6360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.182 [INFO][6360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.197 [INFO][6377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.198 [INFO][6377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.198 [INFO][6377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.208 [WARNING][6377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.208 [INFO][6377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.209 [INFO][6377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:33:49.210604 containerd[1908]: 2025-01-30 15:33:49.209 [INFO][6360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:33:49.210914 containerd[1908]: time="2025-01-30T15:33:49.210687933Z" level=info msg="TearDown network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" successfully" Jan 30 15:33:49.210914 containerd[1908]: time="2025-01-30T15:33:49.210705190Z" level=info msg="StopPodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" returns successfully" Jan 30 15:33:49.211117 containerd[1908]: time="2025-01-30T15:33:49.211102441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs9r5,Uid:5864f7d5-fb06-43dd-b6d9-86f374c2cf41,Namespace:calico-system,Attempt:1,}" Jan 30 15:33:49.213571 systemd[1]: run-netns-cni\x2d734a0870\x2d9e90\x2d4a06\x2d1f57\x2df7ee67c3caa3.mount: Deactivated successfully. 
Jan 30 15:33:49.264998 systemd-networkd[1556]: cali2d919b63bcb: Link UP Jan 30 15:33:49.265108 systemd-networkd[1556]: cali2d919b63bcb: Gained carrier Jan 30 15:33:49.270257 kubelet[3399]: I0130 15:33:49.270207 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-745d85c999-h9vmg" podStartSLOduration=22.544263206 podStartE2EDuration="24.270190472s" podCreationTimestamp="2025-01-30 15:33:25 +0000 UTC" firstStartedPulling="2025-01-30 15:33:47.260242536 +0000 UTC m=+41.187415079" lastFinishedPulling="2025-01-30 15:33:48.986169805 +0000 UTC m=+42.913342345" observedRunningTime="2025-01-30 15:33:49.270097834 +0000 UTC m=+43.197270376" watchObservedRunningTime="2025-01-30 15:33:49.270190472 +0000 UTC m=+43.197363012" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.225 [INFO][6393] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.231 [INFO][6393] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0 csi-node-driver- calico-system 5864f7d5-fb06-43dd-b6d9-86f374c2cf41 801 0 2025-01-30 15:33:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-8297fae690 csi-node-driver-bs9r5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2d919b63bcb [] []}} ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.231 [INFO][6393] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.244 [INFO][6412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" HandleID="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.249 [INFO][6412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" HandleID="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-8297fae690", "pod":"csi-node-driver-bs9r5", "timestamp":"2025-01-30 15:33:49.244509097 +0000 UTC"}, Hostname:"ci-4081.3.0-a-8297fae690", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 
15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.249 [INFO][6412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.249 [INFO][6412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.249 [INFO][6412] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-8297fae690' Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.250 [INFO][6412] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.253 [INFO][6412] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.255 [INFO][6412] ipam/ipam.go 489: Trying affinity for 192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.256 [INFO][6412] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.257 [INFO][6412] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.257 [INFO][6412] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.258 [INFO][6412] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.260 [INFO][6412] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.64/26 handle="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.263 [INFO][6412] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.70/26] block=192.168.112.64/26 handle="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.263 [INFO][6412] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.70/26] handle="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" host="ci-4081.3.0-a-8297fae690" Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.263 [INFO][6412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
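
For orientation on the block these three traces share: 192.168.112.64/26 covers the 64 addresses 192.168.112.64 through 192.168.112.127, so this node can network up to 64 pods from its affine block before the IPAM plugin must claim another. A small self-checking Go sketch; cidrRange is an illustrative helper, not a library function.

package main

import (
	"fmt"
	"net"
)

// cidrRange returns the first and last address covered by a CIDR,
// plus the number of addresses it spans.
func cidrRange(cidr string) (net.IP, net.IP, int) {
	_, n, _ := net.ParseCIDR(cidr)
	first := n.IP.To4()
	ones, bits := n.Mask.Size() // 26, 32 for a /26
	size := 1 << (bits - ones)  // 64 addresses
	last := make(net.IP, len(first))
	copy(last, first)
	for i := range last {
		last[i] |= ^n.Mask[i] // set the host bits
	}
	return first, last, size
}

func main() {
	first, last, size := cidrRange("192.168.112.64/26")
	fmt.Println(first, "-", last, "covers", size, "addresses")
	// 192.168.112.64 - 192.168.112.127 covers 64 addresses; the
	// .68-.70 claims above sit well inside the block.
}
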
Jan 30 15:33:49.271623 containerd[1908]: 2025-01-30 15:33:49.263 [INFO][6412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.70/26] IPv6=[] ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" HandleID="k8s-pod-network.28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.272040 containerd[1908]: 2025-01-30 15:33:49.264 [INFO][6393] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5864f7d5-fb06-43dd-b6d9-86f374c2cf41", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"", Pod:"csi-node-driver-bs9r5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d919b63bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:49.272040 containerd[1908]: 2025-01-30 15:33:49.264 [INFO][6393] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.70/32] ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.272040 containerd[1908]: 2025-01-30 15:33:49.264 [INFO][6393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d919b63bcb ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.272040 containerd[1908]: 2025-01-30 15:33:49.265 [INFO][6393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.272040 containerd[1908]: 2025-01-30 15:33:49.265 [INFO][6393] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5864f7d5-fb06-43dd-b6d9-86f374c2cf41", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a", Pod:"csi-node-driver-bs9r5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d919b63bcb", MAC:"0a:e5:18:b7:73:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:33:49.272040 containerd[1908]: 2025-01-30 15:33:49.270 [INFO][6393] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a" Namespace="calico-system" Pod="csi-node-driver-bs9r5" WorkloadEndpoint="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:33:49.281077 containerd[1908]: time="2025-01-30T15:33:49.281026717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:33:49.281077 containerd[1908]: time="2025-01-30T15:33:49.281063980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:33:49.281077 containerd[1908]: time="2025-01-30T15:33:49.281071412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:49.281187 containerd[1908]: time="2025-01-30T15:33:49.281123527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:33:49.295590 containerd[1908]: time="2025-01-30T15:33:49.295567902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bs9r5,Uid:5864f7d5-fb06-43dd-b6d9-86f374c2cf41,Namespace:calico-system,Attempt:1,} returns sandbox id \"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a\"" Jan 30 15:33:49.296282 containerd[1908]: time="2025-01-30T15:33:49.296272675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 15:33:50.535632 systemd-networkd[1556]: cali2d919b63bcb: Gained IPv6LL Jan 30 15:33:50.584495 containerd[1908]: time="2025-01-30T15:33:50.584471043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:50.584819 containerd[1908]: time="2025-01-30T15:33:50.584769181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 15:33:50.585154 containerd[1908]: time="2025-01-30T15:33:50.585141489Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:50.586152 containerd[1908]: time="2025-01-30T15:33:50.586138463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:50.586637 containerd[1908]: time="2025-01-30T15:33:50.586606066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.290316685s" Jan 30 15:33:50.586670 containerd[1908]: time="2025-01-30T15:33:50.586639562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 15:33:50.588092 containerd[1908]: time="2025-01-30T15:33:50.588080220Z" level=info msg="CreateContainer within sandbox \"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 15:33:50.593765 containerd[1908]: time="2025-01-30T15:33:50.593746198Z" level=info msg="CreateContainer within sandbox \"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"07a4c0e4acbdaf81300813fcc9af58c56d3284329c76b64237ea7f2349956974\"" Jan 30 15:33:50.594192 containerd[1908]: time="2025-01-30T15:33:50.594162630Z" level=info msg="StartContainer for \"07a4c0e4acbdaf81300813fcc9af58c56d3284329c76b64237ea7f2349956974\"" Jan 30 15:33:50.638994 containerd[1908]: time="2025-01-30T15:33:50.638970508Z" level=info msg="StartContainer for \"07a4c0e4acbdaf81300813fcc9af58c56d3284329c76b64237ea7f2349956974\" returns successfully" Jan 30 15:33:50.639597 containerd[1908]: time="2025-01-30T15:33:50.639583781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 15:33:51.961630 containerd[1908]: time="2025-01-30T15:33:51.961587574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:51.961879 containerd[1908]: time="2025-01-30T15:33:51.961809495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 15:33:51.962058 containerd[1908]: time="2025-01-30T15:33:51.962045512Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:51.963128 containerd[1908]: time="2025-01-30T15:33:51.963109795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:33:51.963514 containerd[1908]: time="2025-01-30T15:33:51.963499879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.323898192s" Jan 30 15:33:51.963560 containerd[1908]: time="2025-01-30T15:33:51.963516263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 15:33:51.964638 containerd[1908]: time="2025-01-30T15:33:51.964624100Z" level=info msg="CreateContainer within sandbox \"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 15:33:51.968773 containerd[1908]: time="2025-01-30T15:33:51.968730251Z" level=info msg="CreateContainer within sandbox \"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8abbcbaac3acc0dbbf7a19aa823aca4ee1da516c1596209ffa504d293d5f222d\"" Jan 30 15:33:51.968963 containerd[1908]: time="2025-01-30T15:33:51.968897466Z" level=info msg="StartContainer for \"8abbcbaac3acc0dbbf7a19aa823aca4ee1da516c1596209ffa504d293d5f222d\"" Jan 30 15:33:52.007608 containerd[1908]: time="2025-01-30T15:33:52.007583239Z" level=info msg="StartContainer for \"8abbcbaac3acc0dbbf7a19aa823aca4ee1da516c1596209ffa504d293d5f222d\" returns successfully" Jan 30 15:33:52.159144 kubelet[3399]: I0130 15:33:52.159124 3399 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 15:33:52.159144 kubelet[3399]: I0130 15:33:52.159147 3399 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 15:33:52.198193 kubelet[3399]: I0130 15:33:52.198125 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 15:33:52.306181 kubelet[3399]: I0130 15:33:52.305964 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bs9r5" podStartSLOduration=24.638157982 podStartE2EDuration="27.305919048s" podCreationTimestamp="2025-01-30 15:33:25 +0000 UTC" firstStartedPulling="2025-01-30 15:33:49.296154698 +0000 UTC m=+43.223327239" 
lastFinishedPulling="2025-01-30 15:33:51.963915764 +0000 UTC m=+45.891088305" observedRunningTime="2025-01-30 15:33:52.30528844 +0000 UTC m=+46.232461041" watchObservedRunningTime="2025-01-30 15:33:52.305919048 +0000 UTC m=+46.233091634" Jan 30 15:33:53.054620 kernel: bpftool[6732]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 15:33:53.209210 systemd-networkd[1556]: vxlan.calico: Link UP Jan 30 15:33:53.209214 systemd-networkd[1556]: vxlan.calico: Gained carrier Jan 30 15:33:54.631830 systemd-networkd[1556]: vxlan.calico: Gained IPv6LL Jan 30 15:34:06.110038 containerd[1908]: time="2025-01-30T15:34:06.109923539Z" level=info msg="StopPodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\"" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.144 [WARNING][6971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efa98ef-abd2-46af-8ebe-522bd24dc469", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926", Pod:"calico-apiserver-684d4dff56-bv6pr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caaac78a43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.144 [INFO][6971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.144 [INFO][6971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" iface="eth0" netns="" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.144 [INFO][6971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.144 [INFO][6971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.162 [INFO][6986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.162 [INFO][6986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.162 [INFO][6986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.168 [WARNING][6986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.168 [INFO][6986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.169 [INFO][6986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.171671 containerd[1908]: 2025-01-30 15:34:06.170 [INFO][6971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.171671 containerd[1908]: time="2025-01-30T15:34:06.171646178Z" level=info msg="TearDown network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" successfully" Jan 30 15:34:06.171671 containerd[1908]: time="2025-01-30T15:34:06.171668905Z" level=info msg="StopPodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" returns successfully" Jan 30 15:34:06.172302 containerd[1908]: time="2025-01-30T15:34:06.172139108Z" level=info msg="RemovePodSandbox for \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\"" Jan 30 15:34:06.172302 containerd[1908]: time="2025-01-30T15:34:06.172170398Z" level=info msg="Forcibly stopping sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\"" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.202 [WARNING][7017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efa98ef-abd2-46af-8ebe-522bd24dc469", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"9616b1d8f385f983fde59432c6c4cc0b74813097ad30e78e5055f6974246b926", Pod:"calico-apiserver-684d4dff56-bv6pr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1caaac78a43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.202 [INFO][7017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.202 [INFO][7017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" iface="eth0" netns="" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.202 [INFO][7017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.202 [INFO][7017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.222 [INFO][7032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.222 [INFO][7032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.222 [INFO][7032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.228 [WARNING][7032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.228 [INFO][7032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" HandleID="k8s-pod-network.f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--bv6pr-eth0" Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.229 [INFO][7032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.231790 containerd[1908]: 2025-01-30 15:34:06.230 [INFO][7017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49" Jan 30 15:34:06.232302 containerd[1908]: time="2025-01-30T15:34:06.231820951Z" level=info msg="TearDown network for sandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" successfully" Jan 30 15:34:06.233722 containerd[1908]: time="2025-01-30T15:34:06.233709285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:34:06.233762 containerd[1908]: time="2025-01-30T15:34:06.233739005Z" level=info msg="RemovePodSandbox \"f27e1466159589c7eaaf89a8eb3ca887adb09ffb1699522f9f46ddbbe77ddd49\" returns successfully" Jan 30 15:34:06.234041 containerd[1908]: time="2025-01-30T15:34:06.234030398Z" level=info msg="StopPodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\"" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.253 [WARNING][7065] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d99c3914-c98d-41fc-8f33-a1ffbbccba09", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa", Pod:"coredns-7db6d8ff4d-bzqcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7a10c4ed0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.253 [INFO][7065] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.253 [INFO][7065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" iface="eth0" netns="" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.253 [INFO][7065] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.253 [INFO][7065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.265 [INFO][7078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.265 [INFO][7078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.265 [INFO][7078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.269 [WARNING][7078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.269 [INFO][7078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.271 [INFO][7078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.272484 containerd[1908]: 2025-01-30 15:34:06.271 [INFO][7065] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.272484 containerd[1908]: time="2025-01-30T15:34:06.272478140Z" level=info msg="TearDown network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" successfully" Jan 30 15:34:06.272866 containerd[1908]: time="2025-01-30T15:34:06.272494723Z" level=info msg="StopPodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" returns successfully" Jan 30 15:34:06.272866 containerd[1908]: time="2025-01-30T15:34:06.272771584Z" level=info msg="RemovePodSandbox for \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\"" Jan 30 15:34:06.272866 containerd[1908]: time="2025-01-30T15:34:06.272790471Z" level=info msg="Forcibly stopping sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\"" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.294 [WARNING][7107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d99c3914-c98d-41fc-8f33-a1ffbbccba09", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"91aba1300f160f71d99ac0196ccf7ed0bbee92cfc890780d381c1915777dfbfa", Pod:"coredns-7db6d8ff4d-bzqcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7a10c4ed0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.294 [INFO][7107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.294 [INFO][7107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" iface="eth0" netns="" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.294 [INFO][7107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.294 [INFO][7107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.306 [INFO][7124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.306 [INFO][7124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.306 [INFO][7124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.311 [WARNING][7124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.311 [INFO][7124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" HandleID="k8s-pod-network.b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--bzqcg-eth0" Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.312 [INFO][7124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.313903 containerd[1908]: 2025-01-30 15:34:06.313 [INFO][7107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7" Jan 30 15:34:06.313903 containerd[1908]: time="2025-01-30T15:34:06.313897455Z" level=info msg="TearDown network for sandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" successfully" Jan 30 15:34:06.315462 containerd[1908]: time="2025-01-30T15:34:06.315448100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:34:06.315493 containerd[1908]: time="2025-01-30T15:34:06.315475168Z" level=info msg="RemovePodSandbox \"b0f3907ffb2fc582b30b7d993d5469ed51de2970ee5da924ba66499048347cf7\" returns successfully" Jan 30 15:34:06.315760 containerd[1908]: time="2025-01-30T15:34:06.315749300Z" level=info msg="StopPodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\"" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.334 [WARNING][7153] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
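[Editor's note] The WorkloadEndpointPort entries in the CoreDNS endpoint dumps above print their ports in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the CoreDNS Prometheus metrics port). A one-line check:

package main

import "fmt"

func main() {
	// 0x35 = 53 (DNS), 0x23c1 = 9153 (CoreDNS metrics).
	fmt.Println(0x35, 0x23c1)
}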
ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0", GenerateName:"calico-kube-controllers-745d85c999-", Namespace:"calico-system", SelfLink:"", UID:"12625ef2-af4d-498a-be42-4bc310bbd487", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745d85c999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e", Pod:"calico-kube-controllers-745d85c999-h9vmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali585cfbc1a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.334 [INFO][7153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.334 [INFO][7153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" iface="eth0" netns="" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.334 [INFO][7153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.334 [INFO][7153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.344 [INFO][7166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.344 [INFO][7166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.344 [INFO][7166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.348 [WARNING][7166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.348 [INFO][7166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.349 [INFO][7166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.350317 containerd[1908]: 2025-01-30 15:34:06.349 [INFO][7153] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.350317 containerd[1908]: time="2025-01-30T15:34:06.350313840Z" level=info msg="TearDown network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" successfully" Jan 30 15:34:06.350640 containerd[1908]: time="2025-01-30T15:34:06.350328987Z" level=info msg="StopPodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" returns successfully" Jan 30 15:34:06.350640 containerd[1908]: time="2025-01-30T15:34:06.350594766Z" level=info msg="RemovePodSandbox for \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\"" Jan 30 15:34:06.350640 containerd[1908]: time="2025-01-30T15:34:06.350609665Z" level=info msg="Forcibly stopping sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\"" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.368 [WARNING][7194] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0", GenerateName:"calico-kube-controllers-745d85c999-", Namespace:"calico-system", SelfLink:"", UID:"12625ef2-af4d-498a-be42-4bc310bbd487", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745d85c999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"40f55e571544471b00e22676057e32cb7b241887fd7ed0f5684c13cf0cb9c34e", Pod:"calico-kube-controllers-745d85c999-h9vmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali585cfbc1a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.368 [INFO][7194] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.368 [INFO][7194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" iface="eth0" netns="" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.368 [INFO][7194] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.368 [INFO][7194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.379 [INFO][7208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.379 [INFO][7208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.379 [INFO][7208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.383 [WARNING][7208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.383 [INFO][7208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" HandleID="k8s-pod-network.0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--kube--controllers--745d85c999--h9vmg-eth0" Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.384 [INFO][7208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.386327 containerd[1908]: 2025-01-30 15:34:06.385 [INFO][7194] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb" Jan 30 15:34:06.386327 containerd[1908]: time="2025-01-30T15:34:06.386284454Z" level=info msg="TearDown network for sandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" successfully" Jan 30 15:34:06.387637 containerd[1908]: time="2025-01-30T15:34:06.387623959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:34:06.387674 containerd[1908]: time="2025-01-30T15:34:06.387649547Z" level=info msg="RemovePodSandbox \"0df16f4c9ada80701b1157ba1b20ca05113089a7897a0082d48f1d6676b084cb\" returns successfully" Jan 30 15:34:06.387907 containerd[1908]: time="2025-01-30T15:34:06.387895728Z" level=info msg="StopPodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\"" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.405 [WARNING][7236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47aaec61-dc45-4322-ae57-2b2017382ed5", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5", Pod:"coredns-7db6d8ff4d-9scnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40c37a672eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.405 [INFO][7236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.405 [INFO][7236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" iface="eth0" netns="" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.405 [INFO][7236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.405 [INFO][7236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.416 [INFO][7252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.416 [INFO][7252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.416 [INFO][7252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.420 [WARNING][7252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.420 [INFO][7252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.421 [INFO][7252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.422904 containerd[1908]: 2025-01-30 15:34:06.422 [INFO][7236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.423195 containerd[1908]: time="2025-01-30T15:34:06.422925217Z" level=info msg="TearDown network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" successfully" Jan 30 15:34:06.423195 containerd[1908]: time="2025-01-30T15:34:06.422942057Z" level=info msg="StopPodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" returns successfully" Jan 30 15:34:06.423195 containerd[1908]: time="2025-01-30T15:34:06.423188227Z" level=info msg="RemovePodSandbox for \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\"" Jan 30 15:34:06.423248 containerd[1908]: time="2025-01-30T15:34:06.423204774Z" level=info msg="Forcibly stopping sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\"" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.442 [WARNING][7280] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
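[Editor's note] Note the bracket around every release: "About to acquire host-wide IPAM lock" / "Acquired" / "Released". All IPAM mutations on the node are serialized through one lock, so concurrent CNI invocations (the interleaved IDs such as [7252] and [7280]) cannot race on the allocation store. The shape of that bracket, as a sketch:

package main

import (
	"fmt"
	"sync"
)

// hostWideLock serializes all IPAM mutations on the node, mirroring the
// acquire/release bracket in the ipam_plugin.go entries above.
var hostWideLock sync.Mutex

func withIPAMLock(handleID string, fn func()) {
	fmt.Println("About to acquire host-wide IPAM lock:", handleID)
	hostWideLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		hostWideLock.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()
	fn()
}

func main() {
	var wg sync.WaitGroup
	for _, h := range []string{"k8s-pod-network.f89d825e", "k8s-pod-network.9b77187c"} {
		wg.Add(1)
		go func(h string) {
			defer wg.Done()
			// Concurrent callers queue on the single lock rather than racing.
			withIPAMLock(h, func() { fmt.Println("releasing", h) })
		}(h)
	}
	wg.Wait()
}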
ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"47aaec61-dc45-4322-ae57-2b2017382ed5", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"312b117e7a2648d4f905aabb9a651eaca0dc6926140ea793166215861c1cbde5", Pod:"coredns-7db6d8ff4d-9scnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40c37a672eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.442 [INFO][7280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.442 [INFO][7280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" iface="eth0" netns="" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.442 [INFO][7280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.442 [INFO][7280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.453 [INFO][7292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.453 [INFO][7292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.453 [INFO][7292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.456 [WARNING][7292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.457 [INFO][7292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" HandleID="k8s-pod-network.f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Workload="ci--4081.3.0--a--8297fae690-k8s-coredns--7db6d8ff4d--9scnd-eth0" Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.458 [INFO][7292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.459548 containerd[1908]: 2025-01-30 15:34:06.458 [INFO][7280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17" Jan 30 15:34:06.459858 containerd[1908]: time="2025-01-30T15:34:06.459544983Z" level=info msg="TearDown network for sandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" successfully" Jan 30 15:34:06.460788 containerd[1908]: time="2025-01-30T15:34:06.460774522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:34:06.460827 containerd[1908]: time="2025-01-30T15:34:06.460800109Z" level=info msg="RemovePodSandbox \"f89d825ec810c4b25cc77f5f646deb30ba84887ed4e6a760aaed7ab997764a17\" returns successfully" Jan 30 15:34:06.461064 containerd[1908]: time="2025-01-30T15:34:06.461052593Z" level=info msg="StopPodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\"" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.479 [WARNING][7320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5864f7d5-fb06-43dd-b6d9-86f374c2cf41", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a", Pod:"csi-node-driver-bs9r5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d919b63bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.479 [INFO][7320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.479 [INFO][7320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" iface="eth0" netns="" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.479 [INFO][7320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.479 [INFO][7320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.489 [INFO][7334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.489 [INFO][7334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.489 [INFO][7334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.493 [WARNING][7334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.493 [INFO][7334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.494 [INFO][7334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.495652 containerd[1908]: 2025-01-30 15:34:06.495 [INFO][7320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.495952 containerd[1908]: time="2025-01-30T15:34:06.495668209Z" level=info msg="TearDown network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" successfully" Jan 30 15:34:06.495952 containerd[1908]: time="2025-01-30T15:34:06.495684193Z" level=info msg="StopPodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" returns successfully" Jan 30 15:34:06.495952 containerd[1908]: time="2025-01-30T15:34:06.495936493Z" level=info msg="RemovePodSandbox for \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\"" Jan 30 15:34:06.496005 containerd[1908]: time="2025-01-30T15:34:06.495951610Z" level=info msg="Forcibly stopping sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\"" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.515 [WARNING][7363] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5864f7d5-fb06-43dd-b6d9-86f374c2cf41", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"28c683ab00c04e0d6e6fe698012d22e8b7baa3b98a193adef33bf1433e67009a", Pod:"csi-node-driver-bs9r5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d919b63bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.515 [INFO][7363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.515 [INFO][7363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" iface="eth0" netns="" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.515 [INFO][7363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.515 [INFO][7363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.527 [INFO][7377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.527 [INFO][7377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.527 [INFO][7377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.531 [WARNING][7377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.531 [INFO][7377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" HandleID="k8s-pod-network.9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Workload="ci--4081.3.0--a--8297fae690-k8s-csi--node--driver--bs9r5-eth0" Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.532 [INFO][7377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.534229 containerd[1908]: 2025-01-30 15:34:06.533 [INFO][7363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4" Jan 30 15:34:06.534229 containerd[1908]: time="2025-01-30T15:34:06.534224114Z" level=info msg="TearDown network for sandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" successfully" Jan 30 15:34:06.535579 containerd[1908]: time="2025-01-30T15:34:06.535566904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:34:06.535616 containerd[1908]: time="2025-01-30T15:34:06.535599144Z" level=info msg="RemovePodSandbox \"9b77187c18dc45664126a345a0631586b9101cfe60618fd38b243539015d66a4\" returns successfully" Jan 30 15:34:06.535880 containerd[1908]: time="2025-01-30T15:34:06.535869512Z" level=info msg="StopPodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\"" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.553 [WARNING][7405] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"a950d8fb-24c2-4d89-81c7-1f97e95d8e16", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b", Pod:"calico-apiserver-684d4dff56-gv4f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac3c818f711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.553 [INFO][7405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.553 [INFO][7405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" iface="eth0" netns="" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.553 [INFO][7405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.553 [INFO][7405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.565 [INFO][7418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.565 [INFO][7418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.565 [INFO][7418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.569 [WARNING][7418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.569 [INFO][7418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.570 [INFO][7418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.571660 containerd[1908]: 2025-01-30 15:34:06.570 [INFO][7405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.571660 containerd[1908]: time="2025-01-30T15:34:06.571655103Z" level=info msg="TearDown network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" successfully" Jan 30 15:34:06.572005 containerd[1908]: time="2025-01-30T15:34:06.571672109Z" level=info msg="StopPodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" returns successfully" Jan 30 15:34:06.572005 containerd[1908]: time="2025-01-30T15:34:06.571989944Z" level=info msg="RemovePodSandbox for \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\"" Jan 30 15:34:06.572046 containerd[1908]: time="2025-01-30T15:34:06.572007428Z" level=info msg="Forcibly stopping sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\"" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.593 [WARNING][7445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0", GenerateName:"calico-apiserver-684d4dff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"a950d8fb-24c2-4d89-81c7-1f97e95d8e16", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 15, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684d4dff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-8297fae690", ContainerID:"ccfab53cfedfa205954d59a076b801ad9f1e52635a5200462c9850109c44515b", Pod:"calico-apiserver-684d4dff56-gv4f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac3c818f711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.593 [INFO][7445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.593 [INFO][7445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" iface="eth0" netns="" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.593 [INFO][7445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.593 [INFO][7445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.603 [INFO][7460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.603 [INFO][7460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.603 [INFO][7460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.607 [WARNING][7460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.607 [INFO][7460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" HandleID="k8s-pod-network.fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Workload="ci--4081.3.0--a--8297fae690-k8s-calico--apiserver--684d4dff56--gv4f8-eth0" Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.608 [INFO][7460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 15:34:06.609545 containerd[1908]: 2025-01-30 15:34:06.608 [INFO][7445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca" Jan 30 15:34:06.609841 containerd[1908]: time="2025-01-30T15:34:06.609578568Z" level=info msg="TearDown network for sandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" successfully" Jan 30 15:34:06.610786 containerd[1908]: time="2025-01-30T15:34:06.610774296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:34:06.610823 containerd[1908]: time="2025-01-30T15:34:06.610800283Z" level=info msg="RemovePodSandbox \"fbe55512597425dec862f46d9c1bb04d3f84a5ffba6a055b7b680afe82521aca\" returns successfully" Jan 30 15:34:19.797186 kubelet[3399]: I0130 15:34:19.797073 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 15:35:29.405922 systemd[1]: Started sshd@10-139.178.70.183:22-74.82.47.5:18852.service - OpenSSH per-connection server daemon (74.82.47.5:18852). Jan 30 15:35:29.412167 sshd[7692]: banner exchange: Connection from 74.82.47.5 port 18852: invalid format Jan 30 15:35:29.412494 systemd[1]: sshd@10-139.178.70.183:22-74.82.47.5:18852.service: Deactivated successfully. Jan 30 15:38:18.693521 update_engine[1901]: I20250130 15:38:18.693384 1901 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 15:38:18.693521 update_engine[1901]: I20250130 15:38:18.693485 1901 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.693917 1901 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.694993 1901 omaha_request_params.cc:62] Current group set to lts Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695240 1901 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695269 1901 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695307 1901 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695379 1901 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695559 1901 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695595 1901 omaha_request_action.cc:272] Request: Jan 30 15:38:18.698722 update_engine[1901]: I20250130 15:38:18.695612 1901 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 15:38:18.699252 locksmithd[1939]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 15:38:18.699527 update_engine[1901]: I20250130 15:38:18.698801 1901 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 15:38:18.699527 update_engine[1901]: I20250130 15:38:18.699101 1901 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 15:38:18.699811 update_engine[1901]: E20250130 15:38:18.699752 1901 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 15:38:18.699887 update_engine[1901]: I20250130 15:38:18.699824 1901 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 15:38:28.701783 update_engine[1901]: I20250130 15:38:28.701607 1901 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 15:38:28.702830 update_engine[1901]: I20250130 15:38:28.702158 1901 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 15:38:28.702830 update_engine[1901]: I20250130 15:38:28.702672 1901 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 15:38:28.703356 update_engine[1901]: E20250130 15:38:28.703267 1901 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 15:38:28.703520 update_engine[1901]: I20250130 15:38:28.703400 1901 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 15:38:38.701864 update_engine[1901]: I20250130 15:38:38.701603 1901 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 15:38:38.702993 update_engine[1901]: I20250130 15:38:38.702157 1901 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 15:38:38.702993 update_engine[1901]: I20250130 15:38:38.702698 1901 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 15:38:38.703790 update_engine[1901]: E20250130 15:38:38.703666 1901 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 15:38:38.703974 update_engine[1901]: I20250130 15:38:38.703812 1901 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 15:38:48.694673 update_engine[1901]: I20250130 15:38:48.694471 1901 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 15:38:48.695791 update_engine[1901]: I20250130 15:38:48.695050 1901 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 15:38:48.695791 update_engine[1901]: I20250130 15:38:48.695574 1901 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 15:38:48.696522 update_engine[1901]: E20250130 15:38:48.696393 1901 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 15:38:48.696759 update_engine[1901]: I20250130 15:38:48.696532 1901 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 15:38:48.696759 update_engine[1901]: I20250130 15:38:48.696591 1901 omaha_request_action.cc:617] Omaha request response: Jan 30 15:38:48.696953 update_engine[1901]: E20250130 15:38:48.696749 1901 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 15:38:48.696953 update_engine[1901]: I20250130 15:38:48.696801 1901 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 15:38:48.696953 update_engine[1901]: I20250130 15:38:48.696820 1901 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 15:38:48.696953 update_engine[1901]: I20250130 15:38:48.696836 1901 update_attempter.cc:306] Processing Done. Jan 30 15:38:48.696953 update_engine[1901]: E20250130 15:38:48.696866 1901 update_attempter.cc:619] Update failed. Jan 30 15:38:48.696953 update_engine[1901]: I20250130 15:38:48.696882 1901 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 15:38:48.696953 update_engine[1901]: I20250130 15:38:48.696897 1901 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 15:38:48.696953 update_engine[1901]: I20250130 15:38:48.696913 1901 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 15:38:48.697674 update_engine[1901]: I20250130 15:38:48.697068 1901 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 15:38:48.697674 update_engine[1901]: I20250130 15:38:48.697134 1901 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 15:38:48.697674 update_engine[1901]: I20250130 15:38:48.697152 1901 omaha_request_action.cc:272] Request: Jan 30 15:38:48.697674 update_engine[1901]: I20250130 15:38:48.697168 1901 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 15:38:48.697674 update_engine[1901]: I20250130 15:38:48.697583 1901 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 15:38:48.698568 update_engine[1901]: I20250130 15:38:48.698002 1901 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 15:38:48.698663 locksmithd[1939]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 15:38:48.699365 update_engine[1901]: E20250130 15:38:48.698650 1901 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698793 1901 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698822 1901 omaha_request_action.cc:617] Omaha request response: Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698843 1901 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698856 1901 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698870 1901 update_attempter.cc:306] Processing Done. Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698887 1901 update_attempter.cc:310] Error event sent. Jan 30 15:38:48.699365 update_engine[1901]: I20250130 15:38:48.698920 1901 update_check_scheduler.cc:74] Next update check in 46m21s Jan 30 15:38:48.700069 locksmithd[1939]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 15:39:02.303257 systemd[1]: Started sshd@11-139.178.70.183:22-147.75.109.163:45684.service - OpenSSH per-connection server daemon (147.75.109.163:45684). Jan 30 15:39:02.395585 sshd[8171]: Accepted publickey for core from 147.75.109.163 port 45684 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:02.396474 sshd[8171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:02.399882 systemd-logind[1895]: New session 12 of user core. Jan 30 15:39:02.413230 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 15:39:02.549862 sshd[8171]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:02.551425 systemd[1]: sshd@11-139.178.70.183:22-147.75.109.163:45684.service: Deactivated successfully. Jan 30 15:39:02.552912 systemd-logind[1895]: Session 12 logged out. Waiting for processes to exit. Jan 30 15:39:02.552958 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 15:39:02.553451 systemd-logind[1895]: Removed session 12. Jan 30 15:39:07.569723 systemd[1]: Started sshd@12-139.178.70.183:22-147.75.109.163:39260.service - OpenSSH per-connection server daemon (147.75.109.163:39260). Jan 30 15:39:07.624260 sshd[8204]: Accepted publickey for core from 147.75.109.163 port 39260 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:07.625959 sshd[8204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:07.631604 systemd-logind[1895]: New session 13 of user core. Jan 30 15:39:07.649953 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 15:39:07.747789 sshd[8204]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:07.749509 systemd[1]: sshd@12-139.178.70.183:22-147.75.109.163:39260.service: Deactivated successfully. Jan 30 15:39:07.751098 systemd-logind[1895]: Session 13 logged out. Waiting for processes to exit. Jan 30 15:39:07.751227 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 15:39:07.752013 systemd-logind[1895]: Removed session 13. 
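The update_engine stretch above is one complete failed Omaha check: the update server has been set to the literal string "disabled", so every transfer dies at DNS resolution, is retried on a roughly 10-second cadence, and after the last retry the check is abandoned as error code 37 (kActionCodeOmahaErrorInHTTPResponse), an error event is posted (which fails the same way), and the next check is scheduled 46m21s out. A compact Go sketch of that retry-then-reschedule shape; the constants are read off this log, not taken from update_engine's source.

```go
// Sketch of the retry/reschedule pattern in the update_engine log above.
// Because the update URL is the literal word "disabled", every fetch fails
// with "Could not resolve host: disabled".
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

const (
	maxAttempts = 4                // the initial try plus the three "retry N" lines
	retryDelay  = 10 * time.Second // spacing between the retry timestamps above
)

func fetchOnce(url string) error {
	resp, err := http.Get(url) // here: DNS failure, no HTTP response at all
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func checkForUpdate(url string) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fetchOnce(url); err != nil {
			fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
			time.Sleep(retryDelay)
			continue
		}
		return // a response arrived; normal Omaha processing would continue
	}
	// "Omaha request network transfer failed." -> report the error event and
	// reschedule; the log shows the next check 46m21s out (interval plus jitter).
	next := 45*time.Minute + time.Duration(rand.Intn(180))*time.Second
	fmt.Printf("update failed, next update check in %s\n", next)
}

func main() {
	checkForUpdate("http://disabled/update") // hypothetical stand-in URL
}
```

The "Ignoring failures until we get a valid Omaha response." line shows this failure mode is deliberately benign: on a machine whose update endpoint is disabled, the engine just idles and re-arms the timer rather than escalating.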
Jan 30 15:39:12.762744 systemd[1]: Started sshd@13-139.178.70.183:22-147.75.109.163:39266.service - OpenSSH per-connection server daemon (147.75.109.163:39266). Jan 30 15:39:12.788544 sshd[8234]: Accepted publickey for core from 147.75.109.163 port 39266 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:12.789233 sshd[8234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:12.791821 systemd-logind[1895]: New session 14 of user core. Jan 30 15:39:12.801733 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 15:39:12.886511 sshd[8234]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:12.896870 systemd[1]: Started sshd@14-139.178.70.183:22-147.75.109.163:39282.service - OpenSSH per-connection server daemon (147.75.109.163:39282). Jan 30 15:39:12.897175 systemd[1]: sshd@13-139.178.70.183:22-147.75.109.163:39266.service: Deactivated successfully. Jan 30 15:39:12.898190 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 15:39:12.898911 systemd-logind[1895]: Session 14 logged out. Waiting for processes to exit. Jan 30 15:39:12.899510 systemd-logind[1895]: Removed session 14. Jan 30 15:39:12.922735 sshd[8259]: Accepted publickey for core from 147.75.109.163 port 39282 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:12.923511 sshd[8259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:12.926210 systemd-logind[1895]: New session 15 of user core. Jan 30 15:39:12.939677 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 15:39:13.038478 sshd[8259]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:13.055823 systemd[1]: Started sshd@15-139.178.70.183:22-147.75.109.163:39292.service - OpenSSH per-connection server daemon (147.75.109.163:39292). Jan 30 15:39:13.056087 systemd[1]: sshd@14-139.178.70.183:22-147.75.109.163:39282.service: Deactivated successfully. Jan 30 15:39:13.057273 systemd-logind[1895]: Session 15 logged out. Waiting for processes to exit. Jan 30 15:39:13.057586 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 15:39:13.058216 systemd-logind[1895]: Removed session 15. Jan 30 15:39:13.082464 sshd[8285]: Accepted publickey for core from 147.75.109.163 port 39292 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:13.083207 sshd[8285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:13.086002 systemd-logind[1895]: New session 16 of user core. Jan 30 15:39:13.086575 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 15:39:13.205632 sshd[8285]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:13.208256 systemd[1]: sshd@15-139.178.70.183:22-147.75.109.163:39292.service: Deactivated successfully. Jan 30 15:39:13.209833 systemd-logind[1895]: Session 16 logged out. Waiting for processes to exit. Jan 30 15:39:13.209841 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 15:39:13.210619 systemd-logind[1895]: Removed session 16. Jan 30 15:39:18.230819 systemd[1]: Started sshd@16-139.178.70.183:22-147.75.109.163:35610.service - OpenSSH per-connection server daemon (147.75.109.163:35610). 
Jan 30 15:39:18.257521 sshd[8348]: Accepted publickey for core from 147.75.109.163 port 35610 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:18.258214 sshd[8348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:18.260844 systemd-logind[1895]: New session 17 of user core. Jan 30 15:39:18.278738 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 15:39:18.365638 sshd[8348]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:18.367231 systemd[1]: sshd@16-139.178.70.183:22-147.75.109.163:35610.service: Deactivated successfully. Jan 30 15:39:18.368546 systemd-logind[1895]: Session 17 logged out. Waiting for processes to exit. Jan 30 15:39:18.368667 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 15:39:18.369256 systemd-logind[1895]: Removed session 17. Jan 30 15:39:23.381827 systemd[1]: Started sshd@17-139.178.70.183:22-147.75.109.163:35616.service - OpenSSH per-connection server daemon (147.75.109.163:35616). Jan 30 15:39:23.408133 sshd[8377]: Accepted publickey for core from 147.75.109.163 port 35616 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:23.408877 sshd[8377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:23.411423 systemd-logind[1895]: New session 18 of user core. Jan 30 15:39:23.422725 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 15:39:23.509020 sshd[8377]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:23.510510 systemd[1]: sshd@17-139.178.70.183:22-147.75.109.163:35616.service: Deactivated successfully. Jan 30 15:39:23.512010 systemd-logind[1895]: Session 18 logged out. Waiting for processes to exit. Jan 30 15:39:23.512067 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 15:39:23.512587 systemd-logind[1895]: Removed session 18. Jan 30 15:39:28.534897 systemd[1]: Started sshd@18-139.178.70.183:22-147.75.109.163:53586.service - OpenSSH per-connection server daemon (147.75.109.163:53586). Jan 30 15:39:28.561337 sshd[8404]: Accepted publickey for core from 147.75.109.163 port 53586 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:28.562030 sshd[8404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:28.564764 systemd-logind[1895]: New session 19 of user core. Jan 30 15:39:28.573794 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 15:39:28.660089 sshd[8404]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:28.662147 systemd[1]: sshd@18-139.178.70.183:22-147.75.109.163:53586.service: Deactivated successfully. Jan 30 15:39:28.663241 systemd-logind[1895]: Session 19 logged out. Waiting for processes to exit. Jan 30 15:39:28.663260 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 15:39:28.663848 systemd-logind[1895]: Removed session 19. Jan 30 15:39:33.681236 systemd[1]: Started sshd@19-139.178.70.183:22-147.75.109.163:53598.service - OpenSSH per-connection server daemon (147.75.109.163:53598). Jan 30 15:39:33.709515 sshd[8478]: Accepted publickey for core from 147.75.109.163 port 53598 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:33.710135 sshd[8478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:33.712793 systemd-logind[1895]: New session 20 of user core. Jan 30 15:39:33.725851 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 15:39:33.808282 sshd[8478]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:33.820915 systemd[1]: Started sshd@20-139.178.70.183:22-147.75.109.163:53606.service - OpenSSH per-connection server daemon (147.75.109.163:53606). Jan 30 15:39:33.821212 systemd[1]: sshd@19-139.178.70.183:22-147.75.109.163:53598.service: Deactivated successfully. Jan 30 15:39:33.822162 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 15:39:33.823032 systemd-logind[1895]: Session 20 logged out. Waiting for processes to exit. Jan 30 15:39:33.823726 systemd-logind[1895]: Removed session 20. Jan 30 15:39:33.846803 sshd[8502]: Accepted publickey for core from 147.75.109.163 port 53606 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:33.847469 sshd[8502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:33.850045 systemd-logind[1895]: New session 21 of user core. Jan 30 15:39:33.862885 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 15:39:34.054785 sshd[8502]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:34.069233 systemd[1]: Started sshd@21-139.178.70.183:22-147.75.109.163:53620.service - OpenSSH per-connection server daemon (147.75.109.163:53620). Jan 30 15:39:34.070667 systemd[1]: sshd@20-139.178.70.183:22-147.75.109.163:53606.service: Deactivated successfully. Jan 30 15:39:34.074823 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 15:39:34.078349 systemd-logind[1895]: Session 21 logged out. Waiting for processes to exit. Jan 30 15:39:34.081212 systemd-logind[1895]: Removed session 21. Jan 30 15:39:34.154994 sshd[8524]: Accepted publickey for core from 147.75.109.163 port 53620 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:34.156145 sshd[8524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:34.160200 systemd-logind[1895]: New session 22 of user core. Jan 30 15:39:34.168856 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 15:39:35.300203 sshd[8524]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:35.307737 systemd[1]: Started sshd@22-139.178.70.183:22-147.75.109.163:53624.service - OpenSSH per-connection server daemon (147.75.109.163:53624). Jan 30 15:39:35.308079 systemd[1]: sshd@21-139.178.70.183:22-147.75.109.163:53620.service: Deactivated successfully. Jan 30 15:39:35.309571 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 15:39:35.310523 systemd-logind[1895]: Session 22 logged out. Waiting for processes to exit. Jan 30 15:39:35.311271 systemd-logind[1895]: Removed session 22. Jan 30 15:39:35.334209 sshd[8557]: Accepted publickey for core from 147.75.109.163 port 53624 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:35.335045 sshd[8557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:35.337798 systemd-logind[1895]: New session 23 of user core. Jan 30 15:39:35.350893 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 15:39:35.516265 sshd[8557]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:35.531756 systemd[1]: Started sshd@23-139.178.70.183:22-147.75.109.163:53634.service - OpenSSH per-connection server daemon (147.75.109.163:53634). Jan 30 15:39:35.532206 systemd[1]: sshd@22-139.178.70.183:22-147.75.109.163:53624.service: Deactivated successfully. 
Jan 30 15:39:35.533270 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 15:39:35.534105 systemd-logind[1895]: Session 23 logged out. Waiting for processes to exit. Jan 30 15:39:35.534708 systemd-logind[1895]: Removed session 23. Jan 30 15:39:35.559438 sshd[8583]: Accepted publickey for core from 147.75.109.163 port 53634 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:35.560201 sshd[8583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:35.563235 systemd-logind[1895]: New session 24 of user core. Jan 30 15:39:35.575695 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 15:39:35.698559 sshd[8583]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:35.700163 systemd[1]: sshd@23-139.178.70.183:22-147.75.109.163:53634.service: Deactivated successfully. Jan 30 15:39:35.701554 systemd-logind[1895]: Session 24 logged out. Waiting for processes to exit. Jan 30 15:39:35.701695 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 15:39:35.702215 systemd-logind[1895]: Removed session 24. Jan 30 15:39:40.721803 systemd[1]: Started sshd@24-139.178.70.183:22-147.75.109.163:35728.service - OpenSSH per-connection server daemon (147.75.109.163:35728). Jan 30 15:39:40.748072 sshd[8619]: Accepted publickey for core from 147.75.109.163 port 35728 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:40.748733 sshd[8619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:40.751286 systemd-logind[1895]: New session 25 of user core. Jan 30 15:39:40.762811 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 15:39:40.845098 sshd[8619]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:40.846689 systemd[1]: sshd@24-139.178.70.183:22-147.75.109.163:35728.service: Deactivated successfully. Jan 30 15:39:40.848103 systemd-logind[1895]: Session 25 logged out. Waiting for processes to exit. Jan 30 15:39:40.848143 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 15:39:40.848757 systemd-logind[1895]: Removed session 25. Jan 30 15:39:45.870803 systemd[1]: Started sshd@25-139.178.70.183:22-147.75.109.163:35744.service - OpenSSH per-connection server daemon (147.75.109.163:35744). Jan 30 15:39:45.897055 sshd[8644]: Accepted publickey for core from 147.75.109.163 port 35744 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:45.897769 sshd[8644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:45.900242 systemd-logind[1895]: New session 26 of user core. Jan 30 15:39:45.916794 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 15:39:46.001682 sshd[8644]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:46.003170 systemd[1]: sshd@25-139.178.70.183:22-147.75.109.163:35744.service: Deactivated successfully. Jan 30 15:39:46.004589 systemd-logind[1895]: Session 26 logged out. Waiting for processes to exit. Jan 30 15:39:46.004719 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 15:39:46.005336 systemd-logind[1895]: Removed session 26. Jan 30 15:39:51.024262 systemd[1]: Started sshd@26-139.178.70.183:22-147.75.109.163:43242.service - OpenSSH per-connection server daemon (147.75.109.163:43242). 
Jan 30 15:39:51.076429 sshd[8689]: Accepted publickey for core from 147.75.109.163 port 43242 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 15:39:51.077101 sshd[8689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:39:51.079637 systemd-logind[1895]: New session 27 of user core. Jan 30 15:39:51.099824 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 15:39:51.184664 sshd[8689]: pam_unix(sshd:session): session closed for user core Jan 30 15:39:51.186346 systemd[1]: sshd@26-139.178.70.183:22-147.75.109.163:43242.service: Deactivated successfully. Jan 30 15:39:51.187860 systemd-logind[1895]: Session 27 logged out. Waiting for processes to exit. Jan 30 15:39:51.187954 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 15:39:51.188484 systemd-logind[1895]: Removed session 27.
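Every connection in the sessions above gets its own transient systemd unit named from the connection tuple, sshd@<n>-<localIP>:<port>-<remoteIP>:<port>.service, which is what lets the Started/Deactivated pairs and session scopes be correlated in a flat capture like this. It also makes the one-off connection from 74.82.47.5 earlier stand out: its "banner exchange ... invalid format" means the client never spoke SSH (a scanner probe), so its service deactivates immediately with no session scope. A short Go sketch that recovers the peer from those unit names; the regex is fitted to the lines in this log, not to any documented systemd naming guarantee.

```go
// Sketch: pulling the connection tuple out of the per-connection sshd unit
// names that systemd logs above.
package main

import (
	"fmt"
	"regexp"
)

// Matches e.g. sshd@26-139.178.70.183:22-147.75.109.163:43242.service
var unitRe = regexp.MustCompile(`sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service`)

func main() {
	// A line copied from the log above.
	line := "systemd[1]: Started sshd@26-139.178.70.183:22-147.75.109.163:43242.service"
	if m := unitRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("conn #%s: local %s:%s <- remote %s:%s\n", m[1], m[2], m[3], m[4], m[5])
	}
}
```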