Jan 30 14:10:00.002317 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 30 14:10:00.002332 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 14:10:00.002339 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:10:00.002344 kernel: BIOS-provided physical RAM map:
Jan 30 14:10:00.002348 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 30 14:10:00.002352 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 30 14:10:00.002357 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 30 14:10:00.002361 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 30 14:10:00.002365 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 30 14:10:00.002369 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cbfff] usable
Jan 30 14:10:00.002373 kernel: BIOS-e820: [mem 0x00000000819cc000-0x00000000819ccfff] ACPI NVS
Jan 30 14:10:00.002378 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] reserved
Jan 30 14:10:00.002382 kernel: BIOS-e820: [mem 0x00000000819ce000-0x000000008afccfff] usable
Jan 30 14:10:00.002386 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 30 14:10:00.002391 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 30 14:10:00.002396 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 30 14:10:00.002402 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 30 14:10:00.002406 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 30 14:10:00.002411 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 30 14:10:00.002415 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 14:10:00.002420 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 30 14:10:00.002424 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 30 14:10:00.002429 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 14:10:00.002433 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 30 14:10:00.002438 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 30 14:10:00.002443 kernel: NX (Execute Disable) protection: active
Jan 30 14:10:00.002447 kernel: APIC: Static calls initialized
Jan 30 14:10:00.002452 kernel: SMBIOS 3.2.1 present.
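
The BIOS-e820 entries above are the firmware's complete memory map; summing the ranges marked "usable" is essentially how the kernel arrives at the "Memory: .../33452980K available" total reported further down. A minimal Python sketch of that arithmetic over captured dmesg-style lines (the regex and the boot.log capture file are illustrative assumptions, not part of this log):

    import re

    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    def usable_bytes(lines):
        total = 0
        for line in lines:
            m = E820.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

    with open("boot.log") as f:  # hypothetical capture of this journal
        print(usable_bytes(f) / 2**30, "GiB usable")

On this map the usable ranges sum to roughly 34.26 GB, consistent with the 33452980K total the kernel reports later.
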
Jan 30 14:10:00.002457 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Jan 30 14:10:00.002462 kernel: tsc: Detected 3400.000 MHz processor
Jan 30 14:10:00.002466 kernel: tsc: Detected 3399.906 MHz TSC
Jan 30 14:10:00.002471 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:10:00.002476 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:10:00.002481 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 30 14:10:00.002486 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 30 14:10:00.002491 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:10:00.002496 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 30 14:10:00.002501 kernel: Using GB pages for direct mapping
Jan 30 14:10:00.002506 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:10:00.002511 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 30 14:10:00.002517 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 30 14:10:00.002522 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 30 14:10:00.002527 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 30 14:10:00.002533 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 30 14:10:00.002538 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 30 14:10:00.002543 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 30 14:10:00.002548 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 30 14:10:00.002553 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 30 14:10:00.002558 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 30 14:10:00.002563 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 30 14:10:00.002568 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 30 14:10:00.002574 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 30 14:10:00.002579 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:10:00.002584 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 30 14:10:00.002589 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 30 14:10:00.002594 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:10:00.002599 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:10:00.002604 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 30 14:10:00.002609 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 30 14:10:00.002614 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:10:00.002620 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:10:00.002625 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 30 14:10:00.002630 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 30 14:10:00.002635 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 30 14:10:00.002640 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 30 14:10:00.002645 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 30 14:10:00.002650 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 30 14:10:00.002655 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 30 14:10:00.002664 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 30 14:10:00.002669 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 30 14:10:00.002674 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 30 14:10:00.002679 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 30 14:10:00.002684 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 30 14:10:00.002689 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 30 14:10:00.002694 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 30 14:10:00.002699 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 30 14:10:00.002704 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 30 14:10:00.002710 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 30 14:10:00.002715 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 30 14:10:00.002720 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 30 14:10:00.002725 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 30 14:10:00.002730 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 30 14:10:00.002735 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 30 14:10:00.002740 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 30 14:10:00.002745 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 30 14:10:00.002749 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 30 14:10:00.002755 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 30 14:10:00.002760 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 30 14:10:00.002765 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 30 14:10:00.002770 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 30 14:10:00.002775 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 30 14:10:00.002780 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 30 14:10:00.002785 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 30 14:10:00.002790 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 30 14:10:00.002794 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 30 14:10:00.002799 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 30 14:10:00.002805 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 30 14:10:00.002810 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 30 14:10:00.002815 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 30 14:10:00.002820 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 30 14:10:00.002825 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 30 14:10:00.002830 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 30 14:10:00.002835 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 30 14:10:00.002840 kernel: No NUMA configuration found
Jan 30 14:10:00.002845 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 30 14:10:00.002851 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 30 14:10:00.002856 kernel: Zone ranges:
Jan 30 14:10:00.002861 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:10:00.002866 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 14:10:00.002871 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 30 14:10:00.002876 kernel: Movable zone start for each node
Jan 30 14:10:00.002881 kernel: Early memory node ranges
Jan 30 14:10:00.002886 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 30 14:10:00.002891 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 30 14:10:00.002897 kernel: node 0: [mem 0x0000000040400000-0x00000000819cbfff]
Jan 30 14:10:00.002902 kernel: node 0: [mem 0x00000000819ce000-0x000000008afccfff]
Jan 30 14:10:00.002907 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 30 14:10:00.002912 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 30 14:10:00.002921 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 30 14:10:00.002926 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 30 14:10:00.002932 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:10:00.002937 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 30 14:10:00.002943 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 30 14:10:00.002948 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 30 14:10:00.002954 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 30 14:10:00.002959 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 30 14:10:00.002965 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 30 14:10:00.002970 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 30 14:10:00.002975 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 30 14:10:00.002981 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 14:10:00.002986 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 14:10:00.002992 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 14:10:00.002997 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 14:10:00.003003 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 14:10:00.003008 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 14:10:00.003013 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 14:10:00.003018 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 14:10:00.003024 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 14:10:00.003029 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 14:10:00.003034 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 14:10:00.003039 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 14:10:00.003046 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 14:10:00.003051 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 14:10:00.003056 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 14:10:00.003061 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 14:10:00.003067 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 30 14:10:00.003072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:10:00.003077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:10:00.003083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:10:00.003088 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 14:10:00.003094 kernel: TSC deadline timer available
Jan 30 14:10:00.003099 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 30 14:10:00.003105 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 30 14:10:00.003110 kernel: Booting paravirtualized kernel on bare hardware
Jan 30 14:10:00.003116 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:10:00.003121 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 14:10:00.003126 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 14:10:00.003132 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 14:10:00.003137 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 14:10:00.003144 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:10:00.003149 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:10:00.003154 kernel: random: crng init done
Jan 30 14:10:00.003160 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 30 14:10:00.003165 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 30 14:10:00.003170 kernel: Fallback order for Node 0: 0
Jan 30 14:10:00.003176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 30 14:10:00.003182 kernel: Policy zone: Normal
Jan 30 14:10:00.003187 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:10:00.003193 kernel: software IO TLB: area num 16.
Jan 30 14:10:00.003198 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 732416K reserved, 0K cma-reserved)
Jan 30 14:10:00.003204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 14:10:00.003209 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 14:10:00.003215 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:10:00.003220 kernel: Dynamic Preempt: voluntary
Jan 30 14:10:00.003225 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:10:00.003232 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:10:00.003237 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 14:10:00.003243 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:10:00.003248 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:10:00.003253 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:10:00.003259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:10:00.003264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 14:10:00.003269 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 30 14:10:00.003275 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:10:00.003280 kernel: Console: colour dummy device 80x25
Jan 30 14:10:00.003286 kernel: printk: console [tty0] enabled
Jan 30 14:10:00.003291 kernel: printk: console [ttyS1] enabled
Jan 30 14:10:00.003297 kernel: ACPI: Core revision 20230628
Jan 30 14:10:00.003302 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
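
The "Kernel command line:" entry records exactly what the bootloader handed over; note that rootflags=rw and mount.usrflags=ro appear twice, apparently prepended a second time by the boot flow, which is generally harmless since repeated kernel parameters are usually last-one-wins. A small illustrative sketch of splitting such a line into key/value pairs (not part of this log, just a parsing aid):

    def parse_cmdline(cmdline: str) -> dict:
        # Bare flags (e.g. flatcar.autologin) map to None; repeated keys keep
        # the last occurrence, mirroring typical kernel parameter handling.
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    args = parse_cmdline("rootflags=rw root=LABEL=ROOT console=ttyS1,115200n8 flatcar.autologin")
    assert args["root"] == "LABEL=ROOT"
    assert args["flatcar.autologin"] is None  # bare flag, no value
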
Jan 30 14:10:00.003307 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:10:00.003313 kernel: DMAR: Host address width 39
Jan 30 14:10:00.003318 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 30 14:10:00.003323 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 30 14:10:00.003329 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 30 14:10:00.003335 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 30 14:10:00.003340 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 30 14:10:00.003346 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 30 14:10:00.003351 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 30 14:10:00.003356 kernel: x2apic enabled
Jan 30 14:10:00.003362 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 30 14:10:00.003367 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 30 14:10:00.003373 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 30 14:10:00.003378 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 30 14:10:00.003384 kernel: process: using mwait in idle threads
Jan 30 14:10:00.003389 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 14:10:00.003395 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 14:10:00.003400 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:10:00.003405 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 14:10:00.003410 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 14:10:00.003415 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 14:10:00.003421 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:10:00.003426 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 14:10:00.003431 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 14:10:00.003436 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:10:00.003443 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:10:00.003448 kernel: TAA: Mitigation: TSX disabled
Jan 30 14:10:00.003453 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 30 14:10:00.003458 kernel: SRBDS: Mitigation: Microcode
Jan 30 14:10:00.003464 kernel: GDS: Mitigation: Microcode
Jan 30 14:10:00.003469 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:10:00.003474 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:10:00.003479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:10:00.003485 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 14:10:00.003490 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 14:10:00.003495 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:10:00.003501 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 14:10:00.003507 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 14:10:00.003512 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 30 14:10:00.003517 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:10:00.003523 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:10:00.003528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:10:00.003533 kernel: landlock: Up and running.
Jan 30 14:10:00.003538 kernel: SELinux: Initializing.
Jan 30 14:10:00.003544 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:10:00.003549 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:10:00.003554 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 30 14:10:00.003561 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:10:00.003566 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:10:00.003571 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:10:00.003577 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 30 14:10:00.003582 kernel: ... version: 4
Jan 30 14:10:00.003587 kernel: ... bit width: 48
Jan 30 14:10:00.003593 kernel: ... generic registers: 4
Jan 30 14:10:00.003598 kernel: ... value mask: 0000ffffffffffff
Jan 30 14:10:00.003603 kernel: ... max period: 00007fffffffffff
Jan 30 14:10:00.003610 kernel: ... fixed-purpose events: 3
Jan 30 14:10:00.003615 kernel: ... event mask: 000000070000000f
Jan 30 14:10:00.003620 kernel: signal: max sigframe size: 2032
Jan 30 14:10:00.003626 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 30 14:10:00.003631 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:10:00.003636 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:10:00.003642 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 30 14:10:00.003647 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:10:00.003652 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:10:00.003664 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 30 14:10:00.003670 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
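
The mitigation lines above (Spectre V1/V2, RETBleed, TAA, MMIO Stale Data, SRBDS, GDS) have a runtime counterpart in sysfs, so the same status, including the SMT caveat from the warning above, can be checked on the running machine without searching dmesg. A minimal sketch over the standard /sys/devices/system/cpu/vulnerabilities interface (the exact set of files varies by kernel version):

    import pathlib

    vulns = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vulns.iterdir()):
        # Each file holds a one-line status, e.g. "Mitigation: Clear CPU buffers; SMT vulnerable".
        print(f"{entry.name:30s} {entry.read_text().strip()}")
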
Jan 30 14:10:00.003675 kernel: smp: Brought up 1 node, 16 CPUs
Jan 30 14:10:00.003681 kernel: smpboot: Max logical packages: 1
Jan 30 14:10:00.003686 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 30 14:10:00.003692 kernel: devtmpfs: initialized
Jan 30 14:10:00.003718 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:10:00.003724 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cc000-0x819ccfff] (4096 bytes)
Jan 30 14:10:00.003746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 30 14:10:00.003752 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:10:00.003757 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 14:10:00.003763 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:10:00.003768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:10:00.003773 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:10:00.003779 kernel: audit: type=2000 audit(1738246194.040:1): state=initialized audit_enabled=0 res=1
Jan 30 14:10:00.003784 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:10:00.003789 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:10:00.003794 kernel: cpuidle: using governor menu
Jan 30 14:10:00.003801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:10:00.003806 kernel: dca service started, version 1.12.1
Jan 30 14:10:00.003811 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 14:10:00.003817 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:10:00.003822 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 30 14:10:00.003827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:10:00.003833 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:10:00.003838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:10:00.003843 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:10:00.003849 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:10:00.003855 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:10:00.003860 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:10:00.003865 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:10:00.003870 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:10:00.003876 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 30 14:10:00.003881 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:10:00.003886 kernel: ACPI: SSDT 0xFFFF972C81604800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 30 14:10:00.003892 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:10:00.003898 kernel: ACPI: SSDT 0xFFFF972C815FB800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 30 14:10:00.003903 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:10:00.003909 kernel: ACPI: SSDT 0xFFFF972C815E4800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 30 14:10:00.003914 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:10:00.003919 kernel: ACPI: SSDT 0xFFFF972C815F8000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 30 14:10:00.003924 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:10:00.003930 kernel: ACPI: SSDT 0xFFFF972C8160E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 30 14:10:00.003935 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:10:00.003940 kernel: ACPI: SSDT 0xFFFF972C80EEC400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 30 14:10:00.003946 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 30 14:10:00.003952 kernel: ACPI: Interpreter enabled
Jan 30 14:10:00.003957 kernel: ACPI: PM: (supports S0 S5)
Jan 30 14:10:00.003962 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:10:00.003968 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 30 14:10:00.003973 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 30 14:10:00.003978 kernel: HEST: Table parsing has been initialized.
Jan 30 14:10:00.003983 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
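
The ACPI tables enumerated at the top of the log (DSDT, the many SSDTs, DMAR, TPM2, ...) remain readable after boot under /sys/firmware/acpi/tables, which makes it easy to cross-check a box like this later. A small sketch (standard sysfs path; reading normally requires root, and repeated signatures appear with numeric suffixes such as SSDT1):

    import pathlib

    for table in sorted(pathlib.Path("/sys/firmware/acpi/tables").glob("*")):
        if table.is_file():
            # Name is the table signature; the size should match the hex length
            # field logged above (e.g. DSDT 03C404).
            print(f"{table.name:8s} {table.stat().st_size:#07x} bytes")
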
Jan 30 14:10:00.003989 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:10:00.003995 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:10:00.004000 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 30 14:10:00.004006 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 30 14:10:00.004011 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 30 14:10:00.004017 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 30 14:10:00.004022 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 30 14:10:00.004027 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 30 14:10:00.004032 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 30 14:10:00.004038 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 30 14:10:00.004044 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 30 14:10:00.004049 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 30 14:10:00.004055 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 30 14:10:00.004060 kernel: ACPI: \PIN_: New power resource
Jan 30 14:10:00.004065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 30 14:10:00.004136 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:10:00.004189 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 30 14:10:00.004235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 30 14:10:00.004245 kernel: PCI host bridge to bus 0000:00
Jan 30 14:10:00.004295 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:10:00.004338 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:10:00.004380 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:10:00.004421 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 30 14:10:00.004462 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 30 14:10:00.004502 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 30 14:10:00.004560 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 30 14:10:00.004615 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 30 14:10:00.004668 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.004720 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 30 14:10:00.004769 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 30 14:10:00.004819 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 30 14:10:00.004870 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 30 14:10:00.004920 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 30 14:10:00.004968 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 30 14:10:00.005015 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 30 14:10:00.005066 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 30 14:10:00.005113 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 30 14:10:00.005161 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 30 14:10:00.005211 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 30 14:10:00.005257 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 14:10:00.005310 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 30 14:10:00.005356 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 14:10:00.005407 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 30 14:10:00.005454 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 30 14:10:00.005503 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 30 14:10:00.005559 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 30 14:10:00.005608 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 30 14:10:00.005654 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 30 14:10:00.005708 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 30 14:10:00.005754 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 30 14:10:00.005804 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 30 14:10:00.005854 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 30 14:10:00.005901 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 30 14:10:00.005949 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 30 14:10:00.005994 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jan 30 14:10:00.006044 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jan 30 14:10:00.006089 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jan 30 14:10:00.006140 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 30 14:10:00.006186 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 30 14:10:00.006238 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 30 14:10:00.006285 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.006342 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 30 14:10:00.006389 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.006440 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 30 14:10:00.006486 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.006538 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 30 14:10:00.006584 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.006638 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 30 14:10:00.006689 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.006739 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 30 14:10:00.006786 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 14:10:00.006838 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 30 14:10:00.006889 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 30 14:10:00.006937 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 30 14:10:00.006985 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 30 14:10:00.007036 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 30 14:10:00.007084 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 30 14:10:00.007137 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 30 14:10:00.007186 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 30 14:10:00.007236 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 30 14:10:00.007284 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 30 14:10:00.007333 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 14:10:00.007380 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 14:10:00.007433 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 30 14:10:00.007481 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 30 14:10:00.007529 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 30 14:10:00.007578 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 30 14:10:00.007627 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 14:10:00.007706 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 14:10:00.007755 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 14:10:00.007802 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 30 14:10:00.007849 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 14:10:00.007895 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jan 30 14:10:00.007949 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Jan 30 14:10:00.008000 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Jan 30 14:10:00.008049 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Jan 30 14:10:00.008096 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Jan 30 14:10:00.008145 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Jan 30 14:10:00.008194 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.008241 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jan 30 14:10:00.008288 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:10:00.008337 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jan 30 14:10:00.008390 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Jan 30 14:10:00.008438 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Jan 30 14:10:00.008487 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Jan 30 14:10:00.008535 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Jan 30 14:10:00.008582 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Jan 30 14:10:00.008631 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Jan 30 14:10:00.008684 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jan 30 14:10:00.008732 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 30 14:10:00.008779 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jan 30 14:10:00.008827 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jan 30 14:10:00.008879 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Jan 30 14:10:00.008929 kernel: pci 0000:06:00.0: enabling Extended Tags
Jan 30 14:10:00.008978 kernel: pci 0000:06:00.0: supports D1 D2
Jan 30 14:10:00.009026 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:10:00.009077 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jan 30 14:10:00.009125 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jan 30 14:10:00.009174 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:10:00.009227 kernel: pci_bus 0000:07: extended config space not accessible
Jan 30 14:10:00.009283 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Jan 30 14:10:00.009334 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Jan 30 14:10:00.009387 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Jan 30 14:10:00.009438 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Jan 30 14:10:00.009489 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:10:00.009539 kernel: pci 0000:07:00.0: supports D1 D2
Jan 30 14:10:00.009589 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:10:00.009638 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jan 30 14:10:00.009718 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jan 30 14:10:00.009767 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:10:00.009777 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Jan 30 14:10:00.009783 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Jan 30 14:10:00.009789 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Jan 30 14:10:00.009795 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Jan 30 14:10:00.009800 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Jan 30 14:10:00.009806 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Jan 30 14:10:00.009812 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Jan 30 14:10:00.009817 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Jan 30 14:10:00.009823 kernel: iommu: Default domain type: Translated
Jan 30 14:10:00.009829 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:10:00.009835 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:10:00.009841 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:10:00.009846 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Jan 30 14:10:00.009852 kernel: e820: reserve RAM buffer [mem 0x819cc000-0x83ffffff]
Jan 30 14:10:00.009858 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Jan 30 14:10:00.009864 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Jan 30 14:10:00.009869 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Jan 30 14:10:00.009875 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Jan 30 14:10:00.009925 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Jan 30 14:10:00.009975 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Jan 30 14:10:00.010025 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:10:00.010033 kernel: vgaarb: loaded
Jan 30 14:10:00.010039 kernel: clocksource: Switched to clocksource tsc-early
Jan 30 14:10:00.010045 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:10:00.010051 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:10:00.010056 kernel: pnp: PnP ACPI init
Jan 30 14:10:00.010103 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Jan 30 14:10:00.010152 kernel: pnp 00:02: [dma 0 disabled]
Jan 30 14:10:00.010200 kernel: pnp 00:03: [dma 0 disabled]
Jan 30 14:10:00.010248 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Jan 30 14:10:00.010291 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Jan 30 14:10:00.010337 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Jan 30 14:10:00.010382 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Jan 30 14:10:00.010429 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Jan 30 14:10:00.010472 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Jan 30 14:10:00.010515 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Jan 30 14:10:00.010560 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Jan 30 14:10:00.010602 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Jan 30 14:10:00.010646 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Jan 30 14:10:00.010726 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Jan 30 14:10:00.010777 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Jan 30 14:10:00.010820 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Jan 30 14:10:00.010863 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Jan 30 14:10:00.010905 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Jan 30 14:10:00.010947 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Jan 30 14:10:00.010989 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Jan 30 14:10:00.011032 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Jan 30 14:10:00.011079 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Jan 30 14:10:00.011088 kernel: pnp: PnP ACPI: found 10 devices
Jan 30 14:10:00.011094 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:10:00.011100 kernel: NET: Registered PF_INET protocol family
Jan 30 14:10:00.011106 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:00.011111 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jan 30 14:10:00.011117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:10:00.011123 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:00.011130 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:00.011136 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Jan 30 14:10:00.011142 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:10:00.011147 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:10:00.011154 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:10:00.011160 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:10:00.011208 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Jan 30 14:10:00.011256 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Jan 30 14:10:00.011306 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Jan 30 14:10:00.011356 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 30 14:10:00.011404 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 30 14:10:00.011454 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 30 14:10:00.011502 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 30 14:10:00.011550 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 14:10:00.011596 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 30 14:10:00.011644 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 14:10:00.011727 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jan 30 14:10:00.011774 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jan 30 14:10:00.011821 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:10:00.011868 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jan 30 14:10:00.011915 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jan 30 14:10:00.011964 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 30 14:10:00.012011 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jan 30 14:10:00.012057 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jan 30 14:10:00.012105 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jan 30 14:10:00.012153 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jan 30 14:10:00.012201 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:10:00.012247 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jan 30 14:10:00.012295 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jan 30 14:10:00.012341 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:10:00.012387 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Jan 30 14:10:00.012429 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:10:00.012471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:10:00.012512 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:10:00.012553 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Jan 30 14:10:00.012594 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Jan 30 14:10:00.012642 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Jan 30 14:10:00.012712 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 14:10:00.012776 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Jan 30 14:10:00.012820 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Jan 30 14:10:00.012867 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 14:10:00.012911 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Jan 30 14:10:00.012957 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Jan 30 14:10:00.013003 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Jan 30 14:10:00.013049 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Jan 30 14:10:00.013094 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Jan 30 14:10:00.013102 kernel: PCI: CLS 64 bytes, default 64
Jan 30 14:10:00.013108 kernel: DMAR: No ATSR found
Jan 30 14:10:00.013114 kernel: DMAR: No SATC found
Jan 30 14:10:00.013120 kernel: DMAR: dmar0: Using Queued invalidation
Jan 30 14:10:00.013166 kernel: pci 0000:00:00.0: Adding to iommu group 0
Jan 30 14:10:00.013216 kernel: pci 0000:00:01.0: Adding to iommu group 1
Jan 30 14:10:00.013263 kernel: pci 0000:00:08.0: Adding to iommu group 2
Jan 30 14:10:00.013311 kernel: pci 0000:00:12.0: Adding to iommu group 3
Jan 30 14:10:00.013357 kernel: pci 0000:00:14.0: Adding to iommu group 4
Jan 30 14:10:00.013404 kernel: pci 0000:00:14.2: Adding to iommu group 4
Jan 30 14:10:00.013450 kernel: pci 0000:00:15.0: Adding to iommu group 5
Jan 30 14:10:00.013497 kernel: pci 0000:00:15.1: Adding to iommu group 5
Jan 30 14:10:00.013544 kernel: pci 0000:00:16.0: Adding to iommu group 6
Jan 30 14:10:00.013590 kernel: pci 0000:00:16.1: Adding to iommu group 6
Jan 30 14:10:00.013639 kernel: pci 0000:00:16.4: Adding to iommu group 6
Jan 30 14:10:00.013707 kernel: pci 0000:00:17.0: Adding to iommu group 7
Jan 30 14:10:00.013768 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Jan 30 14:10:00.013814 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Jan 30 14:10:00.013861 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Jan 30 14:10:00.013907 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Jan 30 14:10:00.013954 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Jan 30 14:10:00.014000 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Jan 30 14:10:00.014050 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Jan 30 14:10:00.014096 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Jan 30 14:10:00.014144 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Jan 30 14:10:00.014191 kernel: pci 0000:01:00.0: Adding to iommu group 1
Jan 30 14:10:00.014240 kernel: pci 0000:01:00.1: Adding to iommu group 1
Jan 30 14:10:00.014289 kernel: pci 0000:03:00.0: Adding to iommu group 15
Jan 30 14:10:00.014337 kernel: pci 0000:04:00.0: Adding to iommu group 16
Jan 30 14:10:00.014386 kernel: pci 0000:06:00.0: Adding to iommu group 17
Jan 30 14:10:00.014437 kernel: pci 0000:07:00.0: Adding to iommu group 17
Jan 30 14:10:00.014445 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Jan 30 14:10:00.014451 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 14:10:00.014457 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Jan 30 14:10:00.014463 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Jan 30 14:10:00.014469 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Jan 30 14:10:00.014474 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Jan 30 14:10:00.014480 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Jan 30 14:10:00.014528 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Jan 30 14:10:00.014539 kernel: Initialise system trusted keyrings
Jan 30 14:10:00.014544 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Jan 30 14:10:00.014550 kernel: Key type asymmetric registered
Jan 30 14:10:00.014556 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:10:00.014561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:10:00.014567 kernel: io scheduler mq-deadline registered
Jan 30 14:10:00.014573 kernel: io scheduler kyber registered
Jan 30 14:10:00.014578 kernel: io scheduler bfq registered
Jan 30 14:10:00.014625 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Jan 30 14:10:00.014675 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Jan 30 14:10:00.014756 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Jan 30 14:10:00.014804 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Jan 30 14:10:00.014851 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Jan 30 14:10:00.014897 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Jan 30 14:10:00.014949 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jan 30 14:10:00.014959 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jan 30 14:10:00.014965 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Jan 30 14:10:00.014971 kernel: pstore: Using crash dump compression: deflate
Jan 30 14:10:00.014976 kernel: pstore: Registered erst as persistent store backend
Jan 30 14:10:00.014982 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:10:00.014988 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:10:00.014994 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:10:00.014999 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 14:10:00.015005 kernel: hpet_acpi_add: no address or irqs in _CRS
Jan 30 14:10:00.015055 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jan 30 14:10:00.015064 kernel: i8042: PNP: No PS/2 controller found.
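
Everything the PCI enumeration above reports, the [vendor:device] IDs (the two [15b3:1015] functions are Mellanox ConnectX-4 Lx ports, [8086:1533] is an Intel I210 gigabit NIC, and [1a03:2000] is the ASPEED BMC graphics that later becomes the boot VGA device) plus the BAR assignments, is also exported per device under /sys/bus/pci/devices. A minimal sketch of listing it without parsing dmesg:

    import pathlib

    for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()  # e.g. 0x15b3
        device = (dev / "device").read_text().strip()  # e.g. 0x1015
        pclass = (dev / "class").read_text().strip()   # e.g. 0x020000 = Ethernet
        print(f"{dev.name} [{vendor[2:]}:{device[2:]}] class {pclass}")
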
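The "Adding to iommu group N" lines above define passthrough granularity: devices sharing a group can only be assigned to a guest as a unit, and note that both ConnectX-4 Lx functions land in group 1 together with their root port 00:01.0. The same grouping is visible at runtime under /sys/kernel/iommu_groups; a small sketch:

    import pathlib

    groups = pathlib.Path("/sys/kernel/iommu_groups")
    for g in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        members = sorted(d.name for d in (g / "devices").iterdir())
        print(f"group {g.name}: {' '.join(members)}")
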
Jan 30 14:10:00.015107 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jan 30 14:10:00.015150 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jan 30 14:10:00.015195 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T14:09:58 UTC (1738246198)
Jan 30 14:10:00.015238 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jan 30 14:10:00.015246 kernel: intel_pstate: Intel P-state driver initializing
Jan 30 14:10:00.015252 kernel: intel_pstate: Disabling energy efficiency optimization
Jan 30 14:10:00.015259 kernel: intel_pstate: HWP enabled
Jan 30 14:10:00.015265 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Jan 30 14:10:00.015270 kernel: vesafb: scrolling: redraw
Jan 30 14:10:00.015276 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Jan 30 14:10:00.015282 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000e64b71a3, using 768k, total 768k
Jan 30 14:10:00.015287 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:10:00.015293 kernel: fb0: VESA VGA frame buffer device
Jan 30 14:10:00.015299 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:10:00.015305 kernel: Segment Routing with IPv6
Jan 30 14:10:00.015311 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:10:00.015317 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:10:00.015323 kernel: Key type dns_resolver registered
Jan 30 14:10:00.015329 kernel: microcode: Microcode Update Driver: v2.2.
Jan 30 14:10:00.015334 kernel: IPI shorthand broadcast: enabled
Jan 30 14:10:00.015340 kernel: sched_clock: Marking stable (2540323890, 1385156211)->(4468964782, -543484681)
Jan 30 14:10:00.015346 kernel: registered taskstats version 1
Jan 30 14:10:00.015352 kernel: Loading compiled-in X.509 certificates
Jan 30 14:10:00.015357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 14:10:00.015364 kernel: Key type .fscrypt registered
Jan 30 14:10:00.015369 kernel: Key type fscrypt-provisioning registered
Jan 30 14:10:00.015375 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:10:00.015380 kernel: ima: No architecture policies found
Jan 30 14:10:00.015386 kernel: clk: Disabling unused clocks
Jan 30 14:10:00.015392 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 14:10:00.015398 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 14:10:00.015403 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 14:10:00.015410 kernel: Run /init as init process
Jan 30 14:10:00.015416 kernel: with arguments:
Jan 30 14:10:00.015421 kernel: /init
Jan 30 14:10:00.015427 kernel: with environment:
Jan 30 14:10:00.015432 kernel: HOME=/
Jan 30 14:10:00.015438 kernel: TERM=linux
Jan 30 14:10:00.015443 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:10:00.015450 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:00.015458 systemd[1]: Detected architecture x86-64.
Jan 30 14:10:00.015464 systemd[1]: Running in initrd.
Jan 30 14:10:00.015470 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:10:00.015476 systemd[1]: Hostname set to .
Jan 30 14:10:00.015482 systemd[1]: Initializing machine ID from random generator.
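
The rtc_cmos entry above pairs the wall-clock time with its Unix epoch value; a one-line Python check confirms the two representations agree:

    from datetime import datetime, timezone

    # 1738246198 is the epoch value printed by rtc_cmos above.
    print(datetime.fromtimestamp(1738246198, tz=timezone.utc).isoformat())
    # -> 2025-01-30T14:09:58+00:00, matching "2025-01-30T14:09:58 UTC"
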
Jan 30 14:10:00.015488 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:10:00.015494 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:00.015500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:00.015507 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:10:00.015513 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:00.015519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:10:00.015525 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:10:00.015531 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:10:00.015538 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:10:00.015543 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Jan 30 14:10:00.015550 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Jan 30 14:10:00.015556 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:00.015562 kernel: clocksource: Switched to clocksource tsc
Jan 30 14:10:00.015568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:00.015574 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:00.015580 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:00.015586 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:00.015592 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:00.015598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:00.015605 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:00.015611 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:00.015617 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:00.015622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:00.015628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:00.015634 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:00.015640 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:00.015646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:10:00.015653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:00.015661 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:10:00.015667 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:10:00.015673 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:00.015709 systemd-journald[265]: Collecting audit messages is disabled.
Jan 30 14:10:00.015737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:00.015744 systemd-journald[265]: Journal started
Jan 30 14:10:00.015757 systemd-journald[265]: Runtime Journal (/run/log/journal/2fa9812467b54217b4034bb6faf272d6) is 8.0M, max 639.9M, 631.9M free.
Jan 30 14:10:00.039022 systemd-modules-load[267]: Inserted module 'overlay' Jan 30 14:10:00.061662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:00.091643 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:00.173899 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:10:00.173916 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:10:00.173926 kernel: Bridge firewalling registered Jan 30 14:10:00.153504 systemd-modules-load[267]: Inserted module 'br_netfilter' Jan 30 14:10:00.164121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:00.184930 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:10:00.210962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:00.236083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:00.266085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:00.278554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:10:00.280182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:10:00.281865 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:10:00.287334 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:10:00.287965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:10:00.288074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:00.288722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:00.289454 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:10:00.292878 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:00.303883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:00.307355 systemd-resolved[301]: Positive Trust Anchors: Jan 30 14:10:00.307360 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:10:00.307386 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:10:00.309076 systemd-resolved[301]: Defaulting to hostname 'linux'. Jan 30 14:10:00.315974 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:10:00.347133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:00.375894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
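The positive trust anchor logged by systemd-resolved above is the IANA root-zone KSK digest: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A small sketch parsing that DS-record presentation format (the field layout is standard RDATA; the parser itself is illustrative):

```python
from typing import NamedTuple

class DSRecord(NamedTuple):
    key_tag: int      # identifies the DNSKEY this digest covers
    algorithm: int    # 8 = RSA/SHA-256
    digest_type: int  # 2 = SHA-256
    digest: str       # hex digest of the DNSKEY RDATA

def parse_ds(text: str) -> DSRecord:
    # ". IN DS 20326 8 2 e06d44b8..." -> owner, class, type, then RDATA fields
    owner, klass, rtype, tag, alg, dtype, digest = text.split()
    assert owner == "." and (klass, rtype) == ("IN", "DS")
    return DSRecord(int(tag), int(alg), int(dtype), digest)

anchor = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
print(anchor.key_tag, anchor.algorithm, anchor.digest_type)  # 20326 8 2
```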
Jan 30 14:10:00.479911 dracut-cmdline[306]: dracut-dracut-053 Jan 30 14:10:00.487962 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:10:00.690692 kernel: SCSI subsystem initialized Jan 30 14:10:00.713702 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:10:00.735665 kernel: iscsi: registered transport (tcp) Jan 30 14:10:00.767756 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:10:00.767775 kernel: QLogic iSCSI HBA Driver Jan 30 14:10:00.800876 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:00.821978 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:10:00.903112 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:10:00.903140 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:10:00.922977 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:10:00.980731 kernel: raid6: avx2x4 gen() 51399 MB/s Jan 30 14:10:01.012692 kernel: raid6: avx2x2 gen() 51427 MB/s Jan 30 14:10:01.049641 kernel: raid6: avx2x1 gen() 43333 MB/s Jan 30 14:10:01.049658 kernel: raid6: using algorithm avx2x2 gen() 51427 MB/s Jan 30 14:10:01.097309 kernel: raid6: .... xor() 30096 MB/s, rmw enabled Jan 30 14:10:01.097329 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:10:01.138702 kernel: xor: automatically using best checksumming function avx Jan 30 14:10:01.254696 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:10:01.260978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:01.288915 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:01.296158 systemd-udevd[492]: Using default interface naming scheme 'v255'. Jan 30 14:10:01.299885 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:01.333895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:10:01.364588 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jan 30 14:10:01.381301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:01.410031 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:10:01.498525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:01.532668 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 14:10:01.532725 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 14:10:01.558681 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:10:01.583039 kernel: ACPI: bus type USB registered Jan 30 14:10:01.583058 kernel: usbcore: registered new interface driver usbfs Jan 30 14:10:01.598208 kernel: usbcore: registered new interface driver hub Jan 30 14:10:01.612887 kernel: usbcore: registered new device driver usb Jan 30 14:10:01.626869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
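dracut echoes the effective kernel command line above; downstream tools treat it as space-separated tokens that are either bare flags (`flatcar.autologin`) or `key=value` pairs, with some keys (`console=`) legitimately repeated. A minimal parser sketch (illustrative; real kernel parsing also honors quoting, which is ignored here):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: [values]}; bare flags map to [None].
    Repeated keys (e.g. console=) keep every value in order."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.setdefault(key, []).append(value if sep else None)
    return params

params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=tty0 "
    "console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.autologin"
)
print(params["console"])            # ['tty0', 'ttyS1,115200n8']
print(params["flatcar.autologin"])  # [None]  (bare flag)
```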
Jan 30 14:10:01.637798 kernel: PTP clock support registered Jan 30 14:10:01.637877 kernel: libata version 3.00 loaded. Jan 30 14:10:01.659484 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:10:01.659521 kernel: AES CTR mode by8 optimization enabled Jan 30 14:10:01.664043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:01.697095 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 14:10:01.816754 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 14:10:01.816845 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 14:10:01.978475 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 14:10:01.978550 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 30 14:10:01.978616 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 14:10:01.978683 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 14:10:01.978745 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 14:10:01.978808 kernel: scsi host0: ahci Jan 30 14:10:01.978872 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 14:10:01.978933 kernel: scsi host1: ahci Jan 30 14:10:01.978992 kernel: hub 1-0:1.0: USB hub found Jan 30 14:10:01.979062 kernel: scsi host2: ahci Jan 30 14:10:01.979120 kernel: hub 1-0:1.0: 16 ports detected Jan 30 14:10:01.979187 kernel: scsi host3: ahci Jan 30 14:10:01.979250 kernel: hub 2-0:1.0: USB hub found Jan 30 14:10:01.979318 kernel: scsi host4: ahci Jan 30 14:10:01.979375 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 14:10:01.979384 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 30 14:10:01.979391 kernel: hub 2-0:1.0: 10 ports detected Jan 30 14:10:01.979455 kernel: scsi host5: ahci Jan 30 14:10:01.979515 kernel: pps pps0: new PPS source ptp0 Jan 30 14:10:01.979576 kernel: scsi host6: ahci Jan 30 14:10:01.979636 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 30 14:10:01.979715 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 30 14:10:01.979724 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 14:10:01.979785 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 30 14:10:01.979794 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44 Jan 30 14:10:01.979856 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 30 14:10:01.979864 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 30 14:10:01.979926 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 30 14:10:01.979934 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 14:10:01.979994 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 30 14:10:01.980002 kernel: pps pps1: new PPS source ptp1 Jan 30 14:10:01.980059 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 30 14:10:01.980067 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 30 14:10:02.292944 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 30 14:10:02.292957 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 14:10:02.293041 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 14:10:02.293164 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45 Jan 30 14:10:02.293242 kernel: hub 1-14:1.0: USB hub found Jan 30 14:10:02.293328 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 30 14:10:02.293403 kernel: hub 1-14:1.0: 4 ports detected Jan 30 14:10:02.293483 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 14:10:02.293557 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 14:10:01.704736 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:02.475780 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 14:10:02.475794 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 14:10:02.475804 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 14:10:02.475812 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 14:10:02.475819 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 14:10:02.475826 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 14:10:02.475833 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 14:10:02.475840 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 14:10:01.757708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 14:10:02.607541 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 14:10:02.607555 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002 Jan 30 14:10:03.017422 kernel: ata2.00: Features: NCQ-prio Jan 30 14:10:03.017433 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 14:10:03.017551 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 14:10:03.017623 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 14:10:03.017632 kernel: ata1.00: Features: NCQ-prio Jan 30 14:10:03.017643 kernel: ata2.00: configured for UDMA/133 Jan 30 14:10:03.017651 kernel: ata1.00: configured for UDMA/133 Jan 30 14:10:03.017662 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 14:10:03.314573 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:10:03.314585 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 14:10:03.314666 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 30 14:10:03.314796 kernel: usbcore: registered new interface driver usbhid Jan 30 14:10:03.314812 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 30 14:10:03.314918 kernel: usbhid: USB HID core driver Jan 30 14:10:03.314931 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 14:10:03.314944 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 14:10:03.315041 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 30 14:10:03.315164 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 14:10:03.315260 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 14:10:03.315269 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:10:03.315276 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 14:10:03.315347 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:10:03.315355 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 14:10:03.315443 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:10:03.315505 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 14:10:03.315569 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 14:10:03.315633 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 14:10:03.315704 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 14:10:03.315769 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 14:10:03.315832 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 14:10:03.315909 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:10:03.315968 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:10:03.316026 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 14:10:03.316083 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:10:03.316092 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 14:10:03.316101 kernel: GPT:9289727 != 937703087 Jan 30 14:10:03.316108 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 30 14:10:03.316116 kernel: GPT:9289727 != 937703087 Jan 30 14:10:03.316122 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:10:03.316129 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:10:03.316136 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 14:10:03.316196 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 14:10:03.316260 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 14:10:03.316322 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002 Jan 30 14:10:03.625918 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:10:03.625929 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 14:10:03.626004 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 14:10:03.626073 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 14:10:03.626136 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (562) Jan 30 14:10:03.626145 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 30 14:10:03.626205 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (578) Jan 30 14:10:03.626213 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 14:10:03.626277 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:10:03.626285 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:10:02.449049 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:10:02.528707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:03.680764 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:10:03.680776 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:10:02.528759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:02.670269 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:03.730713 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:10:03.730733 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 30 14:10:03.730845 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:10:02.736741 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:10:03.770796 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 30 14:10:02.766770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:03.771020 disk-uuid[707]: Primary Header is updated. Jan 30 14:10:03.771020 disk-uuid[707]: Secondary Entries is updated. Jan 30 14:10:03.771020 disk-uuid[707]: Secondary Header is updated. Jan 30 14:10:02.766804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:02.821787 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:02.891189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:03.460904 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:03.476310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:03.497878 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 30 14:10:03.519032 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. 
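The GPT warnings above are expected on a first boot: the flashed image carries its backup GPT header at the image's last sector (LBA 9289727), while on a 937703088-sector disk the backup header belongs in the final LBA. The arithmetic, as a sketch:

```python
SECTOR = 512
total_sectors = 937_703_088           # from: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks

expected_alt_lba = total_sectors - 1  # backup GPT header lives in the last sector
found_alt_lba = 9_289_727             # where the flashed image actually put it

print(expected_alt_lba)               # 937703087, matching "GPT:9289727 != 937703087"
image_size = (found_alt_lba + 1) * SECTOR
print(image_size / 2**30)             # ~4.43 GiB: the written image was smaller than the disk
```

The repeated `sdb: sdb1 sdb2 …` partition rescans later in the log are consistent with that table being rewritten for the full disk.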
Jan 30 14:10:03.533117 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 14:10:03.543837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 14:10:03.559675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 14:10:03.578899 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:10:03.604154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:03.785181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:04.703983 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:10:04.723668 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:10:04.723728 disk-uuid[708]: The operation has completed successfully. Jan 30 14:10:04.755993 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:10:04.756041 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:10:04.790962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:10:04.828784 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 14:10:04.828850 sh[739]: Success Jan 30 14:10:04.859427 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:10:04.886790 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:10:04.894976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:10:04.970719 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 14:10:04.970745 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:10:04.991572 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:10:05.009743 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:10:05.026829 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:10:05.062699 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:10:05.064771 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:10:05.073198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:10:05.083836 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:10:05.205794 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:10:05.205809 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:10:05.205816 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:10:05.205823 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:10:05.205830 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:10:05.205837 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:10:05.211828 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:10:05.213195 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:10:05.254839 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
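verity-setup above activates /dev/mapper/usr against the `verity.usrhash` root hash from the kernel command line: the device only reads successfully if every block's hash chains up to that root. A deliberately simplified one-level toy to show the idea (real dm-verity builds a multi-level, salted hash tree with an on-disk superblock; this is not its actual format):

```python
import hashlib

BLOCK = 4096  # dm-verity hashes fixed-size data blocks

def toy_verity_root(data: bytes) -> str:
    """Toy one-level hash tree: hash each data block, then hash the
    concatenation of the block hashes to get a single root value."""
    leaves = b"".join(
        hashlib.sha256(data[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
        for i in range(0, len(data), BLOCK)
    )
    return hashlib.sha256(leaves).hexdigest()

# Any change to any block changes the root, which is what lets a read-only
# /usr be verified against a hash pinned on the kernel command line.
print(toy_verity_root(b"usr partition contents" * 1000))
```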
Jan 30 14:10:05.266006 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:05.303854 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:05.314896 systemd-networkd[923]: lo: Link UP Jan 30 14:10:05.314583 ignition[889]: Ignition 2.19.0 Jan 30 14:10:05.314898 systemd-networkd[923]: lo: Gained carrier Jan 30 14:10:05.314588 ignition[889]: Stage: fetch-offline Jan 30 14:10:05.316617 unknown[889]: fetched base config from "system" Jan 30 14:10:05.314610 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:05.316621 unknown[889]: fetched user config from "system" Jan 30 14:10:05.314615 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:10:05.317278 systemd-networkd[923]: Enumeration completed Jan 30 14:10:05.314670 ignition[889]: parsed url from cmdline: "" Jan 30 14:10:05.317354 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:05.314672 ignition[889]: no config URL provided Jan 30 14:10:05.318101 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:05.314675 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:05.336103 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:05.314697 ignition[889]: parsing config with SHA512: 048cf34dd90582b20c441ddf6f4475a9523bb969538a1b52ad174b1abda4b67236bdd94decd8e85e4bf3d842b2097762e652b1ec1f48847de2b359cf0a512573 Jan 30 14:10:05.345714 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:05.316841 ignition[889]: fetch-offline: fetch-offline passed Jan 30 14:10:05.355257 systemd[1]: Reached target network.target - Network. Jan 30 14:10:05.316844 ignition[889]: POST message to Packet Timeline Jan 30 14:10:05.369832 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 14:10:05.316846 ignition[889]: POST Status error: resource requires networking Jan 30 14:10:05.373883 systemd-networkd[923]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:05.316881 ignition[889]: Ignition finished successfully Jan 30 14:10:05.376830 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:10:05.586956 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 14:10:05.386999 ignition[936]: Ignition 2.19.0 Jan 30 14:10:05.578200 systemd-networkd[923]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:10:05.387007 ignition[936]: Stage: kargs Jan 30 14:10:05.387180 ignition[936]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:05.387192 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:10:05.388064 ignition[936]: kargs: kargs passed Jan 30 14:10:05.388069 ignition[936]: POST message to Packet Timeline Jan 30 14:10:05.388083 ignition[936]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:10:05.388725 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43432->[::1]:53: read: connection refused Jan 30 14:10:05.589828 ignition[936]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 14:10:05.590991 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58689->[::1]:53: read: connection refused Jan 30 14:10:05.818698 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 14:10:05.820069 systemd-networkd[923]: eno1: Link UP Jan 30 14:10:05.820194 systemd-networkd[923]: eno2: Link UP Jan 30 14:10:05.820314 systemd-networkd[923]: enp1s0f0np0: Link UP Jan 30 14:10:05.820453 systemd-networkd[923]: enp1s0f0np0: Gained carrier Jan 30 14:10:05.830891 systemd-networkd[923]: enp1s0f1np1: Link UP Jan 30 14:10:05.862836 systemd-networkd[923]: enp1s0f0np0: DHCPv4 address 139.178.70.199/31, gateway 139.178.70.198 acquired from 145.40.83.140 Jan 30 14:10:05.991535 ignition[936]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 14:10:05.992629 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42378->[::1]:53: read: connection refused Jan 30 14:10:06.620474 systemd-networkd[923]: enp1s0f1np1: Gained carrier Jan 30 14:10:06.793177 ignition[936]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 14:10:06.794330 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43137->[::1]:53: read: connection refused Jan 30 14:10:06.876294 systemd-networkd[923]: enp1s0f0np0: Gained IPv6LL Jan 30 14:10:07.836288 systemd-networkd[923]: enp1s0f1np1: Gained IPv6LL Jan 30 14:10:08.396045 ignition[936]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 14:10:08.397228 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52340->[::1]:53: read: connection refused Jan 30 14:10:11.600710 ignition[936]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 14:10:12.707158 ignition[936]: GET result: OK Jan 30 14:10:13.029797 ignition[936]: Ignition finished successfully Jan 30 14:10:13.034648 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:10:13.059072 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
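The kargs stage's metadata fetch only succeeds once the links and DNS are up; note that the attempt spacing above roughly doubles (about 0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s between attempts). A sketch of that retry pattern (illustrative, not Ignition's actual code):

```python
import time
import urllib.request

def fetch_with_backoff(url: str, attempts: int = 6, first_delay: float = 0.2) -> bytes:
    """Retry a GET with doubling delays, mirroring the attempt spacing
    visible in the Ignition log above."""
    delay = first_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:  # DNS failures surface here before the link is up
            if attempt == attempts:
                raise
            print(f"GET attempt #{attempt} failed ({err}); retrying in {delay}s")
            time.sleep(delay)
            delay *= 2

metadata = fetch_with_backoff("https://metadata.packet.net/metadata")
```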
Jan 30 14:10:13.121270 ignition[954]: Ignition 2.19.0 Jan 30 14:10:13.121288 ignition[954]: Stage: disks Jan 30 14:10:13.121634 ignition[954]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:13.121657 ignition[954]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:10:13.123513 ignition[954]: disks: disks passed Jan 30 14:10:13.123522 ignition[954]: POST message to Packet Timeline Jan 30 14:10:13.123550 ignition[954]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:10:14.788150 ignition[954]: GET result: OK Jan 30 14:10:15.211068 ignition[954]: Ignition finished successfully Jan 30 14:10:15.214270 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:10:15.228990 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:15.247038 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:10:15.268037 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:15.289131 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:15.309078 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:15.337921 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:10:15.372814 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 14:10:15.384297 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:10:15.394882 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:10:15.517666 kernel: EXT4-fs (sdb9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 14:10:15.517730 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:10:15.518065 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:10:15.554876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:15.563376 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:10:15.679379 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (984) Jan 30 14:10:15.679420 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:10:15.679437 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:10:15.679445 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:10:15.679452 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:10:15.679459 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:10:15.584681 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:10:15.696148 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 14:10:15.707877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:10:15.707895 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:15.780915 coreos-metadata[986]: Jan 30 14:10:15.748 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:10:15.714840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:10:15.814765 coreos-metadata[1002]: Jan 30 14:10:15.748 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:10:15.744813 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:10:15.784978 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:10:15.847771 initrd-setup-root[1017]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:10:15.857728 initrd-setup-root[1024]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:10:15.867883 initrd-setup-root[1031]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:10:15.866860 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:15.888922 initrd-setup-root[1038]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:10:15.894883 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:10:15.951867 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:10:15.936511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:10:15.961495 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:10:15.983978 ignition[1105]: INFO : Ignition 2.19.0 Jan 30 14:10:15.983978 ignition[1105]: INFO : Stage: mount Jan 30 14:10:15.997893 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:15.997893 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:10:15.997893 ignition[1105]: INFO : mount: mount passed Jan 30 14:10:15.997893 ignition[1105]: INFO : POST message to Packet Timeline Jan 30 14:10:15.997893 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:10:15.991105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:10:17.008172 coreos-metadata[986]: Jan 30 14:10:17.008 INFO Fetch successful Jan 30 14:10:17.083762 coreos-metadata[986]: Jan 30 14:10:17.083 INFO wrote hostname ci-4081.3.0-a-feecaa3039 to /sysroot/etc/hostname Jan 30 14:10:17.085043 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:17.119961 ignition[1105]: INFO : GET result: OK Jan 30 14:10:17.451752 ignition[1105]: INFO : Ignition finished successfully Jan 30 14:10:17.454413 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:10:17.647627 coreos-metadata[1002]: Jan 30 14:10:17.647 INFO Fetch successful Jan 30 14:10:17.684845 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 14:10:17.684900 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 14:10:17.710836 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:10:17.737094 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:17.781669 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1131) Jan 30 14:10:17.811169 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:10:17.811185 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:10:17.828997 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:10:17.867254 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:10:17.867276 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:10:17.880257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
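flatcar-metadata-hostname above fetches the Equinix Metal (Packet) metadata document and persists the hostname into the target root. Conceptually it amounts to the following sketch (the `hostname` field name is assumed from the metadata document; retries and error handling are omitted):

```python
import json
import urllib.request

# Pull the metadata document and persist its hostname; the target path
# /sysroot/etc/hostname comes straight from the log line above.
with urllib.request.urlopen("https://metadata.packet.net/metadata", timeout=10) as resp:
    metadata = json.load(resp)

with open("/sysroot/etc/hostname", "w") as f:
    f.write(metadata["hostname"] + "\n")
```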
Jan 30 14:10:17.911103 ignition[1148]: INFO : Ignition 2.19.0 Jan 30 14:10:17.911103 ignition[1148]: INFO : Stage: files Jan 30 14:10:17.925891 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:17.925891 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:10:17.925891 ignition[1148]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:10:17.925891 ignition[1148]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:10:17.925891 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:10:17.925891 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 14:10:17.915345 unknown[1148]: wrote ssh authorized keys file for user: core Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:10:18.321011 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 14:10:18.503564 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:10:18.782768 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:10:18.782768 ignition[1148]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:18.811983 ignition[1148]: INFO : files: files passed Jan 30 14:10:18.811983 ignition[1148]: INFO : POST message to Packet Timeline Jan 30 14:10:18.811983 ignition[1148]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:10:20.337141 ignition[1148]: INFO : GET result: OK Jan 30 14:10:20.673933 ignition[1148]: INFO : Ignition finished successfully Jan 30 14:10:20.675259 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:10:20.707938 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:10:20.718241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:10:20.728064 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:10:20.728121 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:10:20.790141 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:20.790141 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:20.803899 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:20.791406 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:20.828338 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:10:20.862873 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
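The files stage logged above (helm tarball, yaml files, `prepare-helm.service` plus its enablement preset) is driven by a declarative Ignition config. An illustrative config of that general shape, not the machine's actual one (spec version and unit contents are assumptions):

```python
import json

# Illustrative Ignition-style config: files to write plus a systemd unit
# with an enable preset, matching the operations visible in the files stage.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
        }]
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True,
                   "contents": "[Unit]\nDescription=Unpack helm\n"}]
    },
}
print(json.dumps(config, indent=2))
```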
Jan 30 14:10:20.915064 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:10:20.915151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:10:20.935080 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:10:20.955876 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:10:20.976074 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:10:20.991771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:10:21.041688 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:21.073178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:10:21.123988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:21.135002 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:21.156999 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:10:21.175058 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:10:21.175249 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:21.204415 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:10:21.225289 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:10:21.243295 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:10:21.261281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:21.283397 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:21.304295 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:10:21.324267 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:21.346322 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:10:21.367304 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:10:21.387290 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:10:21.405168 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:10:21.405577 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:21.440133 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:21.450307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:21.472274 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:10:21.472693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:21.495165 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:10:21.495567 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:21.527269 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:10:21.527741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:21.548481 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:10:21.566129 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:10:21.566592 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 14:10:21.587288 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:10:21.605296 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:10:21.624362 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:10:21.624695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:21.644322 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:10:21.644624 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:21.667363 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:10:21.667799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:21.687366 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:10:21.687774 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:10:21.810913 ignition[1212]: INFO : Ignition 2.19.0 Jan 30 14:10:21.810913 ignition[1212]: INFO : Stage: umount Jan 30 14:10:21.810913 ignition[1212]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:21.810913 ignition[1212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:10:21.810913 ignition[1212]: INFO : umount: umount passed Jan 30 14:10:21.810913 ignition[1212]: INFO : POST message to Packet Timeline Jan 30 14:10:21.810913 ignition[1212]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:10:21.705385 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:10:21.705810 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:21.735830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:10:21.741547 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:10:21.772934 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:10:21.773392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:21.801938 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:10:21.802012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:21.847841 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:10:21.848506 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:10:21.848600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:10:21.858802 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:10:21.858935 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:10:22.195426 ignition[1212]: INFO : GET result: OK Jan 30 14:10:22.528887 ignition[1212]: INFO : Ignition finished successfully Jan 30 14:10:22.531576 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:10:22.531927 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:10:22.548949 systemd[1]: Stopped target network.target - Network. Jan 30 14:10:22.563916 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:10:22.564130 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:10:22.582021 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:10:22.582170 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:10:22.600103 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:10:22.600265 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 30 14:10:22.618091 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:10:22.618266 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:22.636072 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:10:22.636262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:22.655490 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:10:22.665806 systemd-networkd[923]: enp1s0f0np0: DHCPv6 lease lost Jan 30 14:10:22.673209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:10:22.674857 systemd-networkd[923]: enp1s0f1np1: DHCPv6 lease lost Jan 30 14:10:22.693772 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:10:22.694053 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:10:22.714071 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:10:22.714422 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:10:22.735371 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:10:22.735499 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:22.773826 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:10:22.781801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:10:22.781837 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:22.801932 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:10:22.802017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:22.819961 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:10:22.820068 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:22.841076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:10:22.841247 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:22.860328 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:22.879993 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:10:22.880406 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:22.917215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:10:22.917260 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:22.917965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:10:22.917996 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:22.952926 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:10:22.953011 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:22.983300 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:10:22.983517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:23.012176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:23.012386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:23.058080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 30 14:10:23.063016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:10:23.063212 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:23.094041 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 14:10:23.094204 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:10:23.116043 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:10:23.116192 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:23.135004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:23.392846 systemd-journald[265]: Received SIGTERM from PID 1 (systemd). Jan 30 14:10:23.135153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:23.158145 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:10:23.158436 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:10:23.217905 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:10:23.218167 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:10:23.237940 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:10:23.271999 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:10:23.323739 systemd[1]: Switching root. Jan 30 14:10:23.465858 systemd-journald[265]: Journal stopped Jan 30 14:10:00.002317 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27 Jan 30 14:10:00.002332 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 14:10:00.002339 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:10:00.002344 kernel: BIOS-provided physical RAM map: Jan 30 14:10:00.002348 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 30 14:10:00.002352 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 30 14:10:00.002357 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 30 14:10:00.002361 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 30 14:10:00.002365 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 30 14:10:00.002369 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cbfff] usable Jan 30 14:10:00.002373 kernel: BIOS-e820: [mem 0x00000000819cc000-0x00000000819ccfff] ACPI NVS Jan 30 14:10:00.002378 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] reserved Jan 30 14:10:00.002382 kernel: BIOS-e820: [mem 0x00000000819ce000-0x000000008afccfff] usable Jan 30 14:10:00.002386 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jan 30 14:10:00.002391 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jan 30 14:10:00.002396 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jan 30 
14:10:00.002402 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jan 30 14:10:00.002406 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jan 30 14:10:00.002411 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jan 30 14:10:00.002415 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 30 14:10:00.002420 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 30 14:10:00.002424 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 30 14:10:00.002429 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 30 14:10:00.002433 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 30 14:10:00.002438 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jan 30 14:10:00.002443 kernel: NX (Execute Disable) protection: active Jan 30 14:10:00.002447 kernel: APIC: Static calls initialized Jan 30 14:10:00.002452 kernel: SMBIOS 3.2.1 present. Jan 30 14:10:00.002457 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022 Jan 30 14:10:00.002462 kernel: tsc: Detected 3400.000 MHz processor Jan 30 14:10:00.002466 kernel: tsc: Detected 3399.906 MHz TSC Jan 30 14:10:00.002471 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 14:10:00.002476 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 14:10:00.002481 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jan 30 14:10:00.002486 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 30 14:10:00.002491 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 14:10:00.002496 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jan 30 14:10:00.002501 kernel: Using GB pages for direct mapping Jan 30 14:10:00.002506 kernel: ACPI: Early table checksum verification disabled Jan 30 14:10:00.002511 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 30 14:10:00.002517 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 30 14:10:00.002522 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jan 30 14:10:00.002527 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 30 14:10:00.002533 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jan 30 14:10:00.002538 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jan 30 14:10:00.002543 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jan 30 14:10:00.002548 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 30 14:10:00.002553 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 30 14:10:00.002558 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
Jan 30 14:10:00.002563 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 30 14:10:00.002568 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 30 14:10:00.002574 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 30 14:10:00.002579 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 14:10:00.002584 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 30 14:10:00.002589 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 30 14:10:00.002594 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 14:10:00.002599 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 14:10:00.002604 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 30 14:10:00.002609 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 30 14:10:00.002614 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 14:10:00.002620 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 30 14:10:00.002625 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 30 14:10:00.002630 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jan 30 14:10:00.002635 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 30 14:10:00.002640 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 30 14:10:00.002645 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 30 14:10:00.002650 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jan 30 14:10:00.002655 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 30 14:10:00.002664 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 30 14:10:00.002669 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 30 14:10:00.002674 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI.
00000000) Jan 30 14:10:00.002679 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 30 14:10:00.002684 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jan 30 14:10:00.002689 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jan 30 14:10:00.002694 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jan 30 14:10:00.002699 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jan 30 14:10:00.002704 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jan 30 14:10:00.002710 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jan 30 14:10:00.002715 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jan 30 14:10:00.002720 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jan 30 14:10:00.002725 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jan 30 14:10:00.002730 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jan 30 14:10:00.002735 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jan 30 14:10:00.002740 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jan 30 14:10:00.002745 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jan 30 14:10:00.002749 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jan 30 14:10:00.002755 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jan 30 14:10:00.002760 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jan 30 14:10:00.002765 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jan 30 14:10:00.002770 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jan 30 14:10:00.002775 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jan 30 14:10:00.002780 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jan 30 14:10:00.002785 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jan 30 14:10:00.002790 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jan 30 14:10:00.002794 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jan 30 14:10:00.002799 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jan 30 14:10:00.002805 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jan 30 14:10:00.002810 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jan 30 14:10:00.002815 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jan 30 14:10:00.002820 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jan 30 14:10:00.002825 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jan 30 14:10:00.002830 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jan 30 14:10:00.002835 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jan 30 14:10:00.002840 kernel: No NUMA configuration found Jan 30 14:10:00.002845 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jan 30 14:10:00.002851 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jan 30 14:10:00.002856 kernel: Zone ranges: Jan 30 14:10:00.002861 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 14:10:00.002866 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 
14:10:00.002871 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jan 30 14:10:00.002876 kernel: Movable zone start for each node Jan 30 14:10:00.002881 kernel: Early memory node ranges Jan 30 14:10:00.002886 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 30 14:10:00.002891 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 30 14:10:00.002897 kernel: node 0: [mem 0x0000000040400000-0x00000000819cbfff] Jan 30 14:10:00.002902 kernel: node 0: [mem 0x00000000819ce000-0x000000008afccfff] Jan 30 14:10:00.002907 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 30 14:10:00.002912 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 30 14:10:00.002921 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 30 14:10:00.002926 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 30 14:10:00.002932 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 14:10:00.002937 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 30 14:10:00.002943 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 30 14:10:00.002948 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 30 14:10:00.002954 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 30 14:10:00.002959 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 30 14:10:00.002965 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 30 14:10:00.002970 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 30 14:10:00.002975 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 30 14:10:00.002981 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 30 14:10:00.002986 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 30 14:10:00.002992 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 30 14:10:00.002997 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 30 14:10:00.003003 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 30 14:10:00.003008 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 30 14:10:00.003013 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 30 14:10:00.003018 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 30 14:10:00.003024 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 30 14:10:00.003029 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 30 14:10:00.003034 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 30 14:10:00.003039 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 30 14:10:00.003046 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 30 14:10:00.003051 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 30 14:10:00.003056 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 30 14:10:00.003061 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 30 14:10:00.003067 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 30 14:10:00.003072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 14:10:00.003077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 14:10:00.003083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 14:10:00.003088 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 14:10:00.003094 kernel: TSC deadline timer available Jan 30 14:10:00.003099 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 30 14:10:00.003105 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Jan 30 14:10:00.003110 kernel: Booting paravirtualized kernel on bare hardware Jan 30 14:10:00.003116 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 14:10:00.003121 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 14:10:00.003126 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 14:10:00.003132 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 14:10:00.003137 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 14:10:00.003144 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:10:00.003149 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 14:10:00.003154 kernel: random: crng init done Jan 30 14:10:00.003160 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 30 14:10:00.003165 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 30 14:10:00.003170 kernel: Fallback order for Node 0: 0 Jan 30 14:10:00.003176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 30 14:10:00.003182 kernel: Policy zone: Normal Jan 30 14:10:00.003187 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 14:10:00.003193 kernel: software IO TLB: area num 16. Jan 30 14:10:00.003198 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 732416K reserved, 0K cma-reserved) Jan 30 14:10:00.003204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 14:10:00.003209 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 14:10:00.003215 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 14:10:00.003220 kernel: Dynamic Preempt: voluntary Jan 30 14:10:00.003225 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 14:10:00.003232 kernel: rcu: RCU event tracing is enabled. Jan 30 14:10:00.003237 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 14:10:00.003243 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 14:10:00.003248 kernel: Rude variant of Tasks RCU enabled. Jan 30 14:10:00.003253 kernel: Tracing variant of Tasks RCU enabled. Jan 30 14:10:00.003259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 14:10:00.003264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 14:10:00.003269 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 14:10:00.003275 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 14:10:00.003280 kernel: Console: colour dummy device 80x25 Jan 30 14:10:00.003286 kernel: printk: console [tty0] enabled Jan 30 14:10:00.003291 kernel: printk: console [ttyS1] enabled Jan 30 14:10:00.003297 kernel: ACPI: Core revision 20230628 Jan 30 14:10:00.003302 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
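Note on the "Kernel command line" record above: parameters are plain space-separated flag or key=value tokens (dracut has prepended rootflags=rw mount.usrflags=ro a second time, which is benign). A minimal illustrative Python sketch of the parsing; the sample string is abridged from the log, and on a live system the same data can be read from /proc/cmdline:

```python
# Illustrative only: split a kernel command line like the one logged above
# into flag and key=value parameters. Repeated keys (e.g. the two console=
# entries) keep only the last value in this simple dict sketch, whereas the
# kernel sees both.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

sample = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
          "consoleblank=0 root=LABEL=ROOT flatcar.first_boot=detected")
print(parse_cmdline(sample)["root"])  # -> LABEL=ROOT
```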
Jan 30 14:10:00.003307 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 14:10:00.003313 kernel: DMAR: Host address width 39 Jan 30 14:10:00.003318 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 14:10:00.003323 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 14:10:00.003329 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 30 14:10:00.003335 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 30 14:10:00.003340 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 14:10:00.003346 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 30 14:10:00.003351 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 14:10:00.003356 kernel: x2apic enabled Jan 30 14:10:00.003362 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 14:10:00.003367 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 14:10:00.003373 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 30 14:10:00.003378 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 14:10:00.003384 kernel: process: using mwait in idle threads Jan 30 14:10:00.003389 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 14:10:00.003395 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 14:10:00.003400 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 14:10:00.003405 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 14:10:00.003410 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 14:10:00.003415 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 14:10:00.003421 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 14:10:00.003426 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 14:10:00.003431 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 14:10:00.003436 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 14:10:00.003443 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 14:10:00.003448 kernel: TAA: Mitigation: TSX disabled Jan 30 14:10:00.003453 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 14:10:00.003458 kernel: SRBDS: Mitigation: Microcode Jan 30 14:10:00.003464 kernel: GDS: Mitigation: Microcode Jan 30 14:10:00.003469 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 14:10:00.003474 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 14:10:00.003479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 14:10:00.003485 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 14:10:00.003490 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 14:10:00.003495 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 14:10:00.003501 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 14:10:00.003507 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 14:10:00.003512 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
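The mitigation lines above (Spectre V1/V2, RETBleed, TAA, MMIO Stale Data, SRBDS, GDS) are also exported through sysfs after boot, so the same summary can be reread at any time. A minimal sketch, assuming the standard /sys/devices/system/cpu/vulnerabilities directory:

```python
# Dump the per-vulnerability mitigation status the kernel reported at boot.
# The directory below is a standard sysfs interface on modern kernels.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```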
Jan 30 14:10:00.003517 kernel: Freeing SMP alternatives memory: 32K Jan 30 14:10:00.003523 kernel: pid_max: default: 32768 minimum: 301 Jan 30 14:10:00.003528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 14:10:00.003533 kernel: landlock: Up and running. Jan 30 14:10:00.003538 kernel: SELinux: Initializing. Jan 30 14:10:00.003544 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:10:00.003549 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:10:00.003554 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 14:10:00.003561 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 14:10:00.003566 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 14:10:00.003571 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 14:10:00.003577 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 14:10:00.003582 kernel: ... version: 4 Jan 30 14:10:00.003587 kernel: ... bit width: 48 Jan 30 14:10:00.003593 kernel: ... generic registers: 4 Jan 30 14:10:00.003598 kernel: ... value mask: 0000ffffffffffff Jan 30 14:10:00.003603 kernel: ... max period: 00007fffffffffff Jan 30 14:10:00.003610 kernel: ... fixed-purpose events: 3 Jan 30 14:10:00.003615 kernel: ... event mask: 000000070000000f Jan 30 14:10:00.003620 kernel: signal: max sigframe size: 2032 Jan 30 14:10:00.003626 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 14:10:00.003631 kernel: rcu: Hierarchical SRCU implementation. Jan 30 14:10:00.003636 kernel: rcu: Max phase no-delay instances is 400. Jan 30 14:10:00.003642 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 14:10:00.003647 kernel: smp: Bringing up secondary CPUs ... Jan 30 14:10:00.003652 kernel: smpboot: x86: Booting SMP configuration: Jan 30 14:10:00.003664 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 14:10:00.003670 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
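The "LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity" record can be cross-checked from userspace once boot finishes; a one-liner sketch, assuming securityfs is mounted at /sys/kernel/security (normal under systemd; reading it may require root):

```python
# Print the active LSM stack in initialization order; this should match the
# boot-time "LSM: initializing lsm=..." line above.
print(open("/sys/kernel/security/lsm").read().strip().split(","))
```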
Jan 30 14:10:00.003675 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 14:10:00.003681 kernel: smpboot: Max logical packages: 1 Jan 30 14:10:00.003686 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 14:10:00.003692 kernel: devtmpfs: initialized Jan 30 14:10:00.003718 kernel: x86/mm: Memory block size: 128MB Jan 30 14:10:00.003724 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cc000-0x819ccfff] (4096 bytes) Jan 30 14:10:00.003746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 30 14:10:00.003752 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 14:10:00.003757 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 14:10:00.003763 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 14:10:00.003768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 14:10:00.003773 kernel: audit: initializing netlink subsys (disabled) Jan 30 14:10:00.003779 kernel: audit: type=2000 audit(1738246194.040:1): state=initialized audit_enabled=0 res=1 Jan 30 14:10:00.003784 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 14:10:00.003789 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 14:10:00.003794 kernel: cpuidle: using governor menu Jan 30 14:10:00.003801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 14:10:00.003806 kernel: dca service started, version 1.12.1 Jan 30 14:10:00.003811 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 14:10:00.003817 kernel: PCI: Using configuration type 1 for base access Jan 30 14:10:00.003822 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 14:10:00.003827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
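To confirm the "smp: Brought up 1 node, 16 CPUs" count from userspace, a minimal sketch (it assumes no affinity mask restricts the calling process, otherwise the set is smaller than the online CPU count):

```python
# Count the CPUs visible to this process; on this box it should print 16.
import os

online = os.sched_getaffinity(0)
print(f"{len(online)} CPUs online: {sorted(online)}")
```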
Jan 30 14:10:00.003833 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 14:10:00.003838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 14:10:00.003843 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 14:10:00.003849 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 14:10:00.003855 kernel: ACPI: Added _OSI(Module Device) Jan 30 14:10:00.003860 kernel: ACPI: Added _OSI(Processor Device) Jan 30 14:10:00.003865 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 14:10:00.003870 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 14:10:00.003876 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 14:10:00.003881 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:10:00.003886 kernel: ACPI: SSDT 0xFFFF972C81604800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 14:10:00.003892 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:10:00.003898 kernel: ACPI: SSDT 0xFFFF972C815FB800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 14:10:00.003903 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:10:00.003909 kernel: ACPI: SSDT 0xFFFF972C815E4800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 14:10:00.003914 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:10:00.003919 kernel: ACPI: SSDT 0xFFFF972C815F8000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 14:10:00.003924 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:10:00.003930 kernel: ACPI: SSDT 0xFFFF972C8160E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 14:10:00.003935 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:10:00.003940 kernel: ACPI: SSDT 0xFFFF972C80EEC400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 14:10:00.003946 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 14:10:00.003952 kernel: ACPI: Interpreter enabled Jan 30 14:10:00.003957 kernel: ACPI: PM: (supports S0 S5) Jan 30 14:10:00.003962 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 14:10:00.003968 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 14:10:00.003973 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 14:10:00.003978 kernel: HEST: Table parsing has been initialized. Jan 30 14:10:00.003983 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
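The two HugeTLB pools registered above (2.00 MiB and 1.00 GiB, both with 0 pages pre-allocated) are visible under /sys/kernel/mm/hugepages; a short listing sketch:

```python
# List each registered hugepage size and how many pages are allocated.
from pathlib import Path

for pool in sorted(Path("/sys/kernel/mm/hugepages").iterdir()):
    n = (pool / "nr_hugepages").read_text().strip()
    print(f"{pool.name}: nr_hugepages={n}")
```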
Jan 30 14:10:00.003989 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 14:10:00.003995 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 14:10:00.004000 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 14:10:00.004006 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 14:10:00.004011 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 14:10:00.004017 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 14:10:00.004022 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 14:10:00.004027 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 14:10:00.004032 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 14:10:00.004038 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 14:10:00.004044 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 14:10:00.004049 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 14:10:00.004055 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 14:10:00.004060 kernel: ACPI: \PIN_: New power resource Jan 30 14:10:00.004065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 14:10:00.004136 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 14:10:00.004189 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 14:10:00.004235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 14:10:00.004245 kernel: PCI host bridge to bus 0000:00 Jan 30 14:10:00.004295 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 14:10:00.004338 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 14:10:00.004380 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 14:10:00.004421 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 30 14:10:00.004462 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 14:10:00.004502 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 14:10:00.004560 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 14:10:00.004615 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 14:10:00.004668 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.004720 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 14:10:00.004769 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 30 14:10:00.004819 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 14:10:00.004870 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 30 14:10:00.004920 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 14:10:00.004968 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 30 14:10:00.005015 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 30 14:10:00.005066 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 14:10:00.005113 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 30 14:10:00.005161 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 30 14:10:00.005211 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 14:10:00.005257 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 14:10:00.005310 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 14:10:00.005356 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 14:10:00.005407 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 14:10:00.005454 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 30 14:10:00.005503 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 14:10:00.005559 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 14:10:00.005608 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 30 14:10:00.005654 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 14:10:00.005708 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 14:10:00.005754 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 30 14:10:00.005804 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 14:10:00.005854 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 14:10:00.005901 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 30 14:10:00.005949 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 30 14:10:00.005994 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 30 14:10:00.006044 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 30 14:10:00.006089 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 30 14:10:00.006140 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 30 14:10:00.006186 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 30 14:10:00.006238 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 14:10:00.006285 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.006342 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 14:10:00.006389 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.006440 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 14:10:00.006486 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.006538 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 14:10:00.006584 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.006638 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 30 14:10:00.006689 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.006739 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 14:10:00.006786 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 14:10:00.006838 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 14:10:00.006889 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 14:10:00.006937 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 30 14:10:00.006985 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 14:10:00.007036 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 14:10:00.007084 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 14:10:00.007137 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 14:10:00.007186 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 14:10:00.007236 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 30 14:10:00.007284 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 30 14:10:00.007333 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 14:10:00.007380 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Jan 30 14:10:00.007433 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 14:10:00.007481 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 14:10:00.007529 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 30 14:10:00.007578 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 30 14:10:00.007627 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 14:10:00.007706 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 14:10:00.007755 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 14:10:00.007802 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 14:10:00.007849 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 14:10:00.007895 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 14:10:00.007949 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 30 14:10:00.008000 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 30 14:10:00.008049 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 30 14:10:00.008096 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 14:10:00.008145 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 30 14:10:00.008194 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.008241 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 14:10:00.008288 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 14:10:00.008337 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 14:10:00.008390 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 14:10:00.008438 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 14:10:00.008487 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 30 14:10:00.008535 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 14:10:00.008582 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 30 14:10:00.008631 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 14:10:00.008684 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 14:10:00.008732 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 14:10:00.008779 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 14:10:00.008827 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 14:10:00.008879 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 14:10:00.008929 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 30 14:10:00.008978 kernel: pci 0000:06:00.0: supports D1 D2 Jan 30 14:10:00.009026 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 14:10:00.009077 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 14:10:00.009125 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 14:10:00.009174 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:10:00.009227 kernel: pci_bus 0000:07: extended config space not accessible Jan 30 14:10:00.009283 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 14:10:00.009334 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 30 14:10:00.009387 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 30 14:10:00.009438 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 14:10:00.009489 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 14:10:00.009539 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 14:10:00.009589 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 14:10:00.009638 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 14:10:00.009718 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 14:10:00.009767 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:10:00.009777 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 14:10:00.009783 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 14:10:00.009789 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 14:10:00.009795 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 14:10:00.009800 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 14:10:00.009806 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 14:10:00.009812 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 14:10:00.009817 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 14:10:00.009823 kernel: iommu: Default domain type: Translated Jan 30 14:10:00.009829 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 14:10:00.009835 kernel: PCI: Using ACPI for IRQ routing Jan 30 14:10:00.009841 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 14:10:00.009846 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 14:10:00.009852 kernel: e820: reserve RAM buffer [mem 0x819cc000-0x83ffffff] Jan 30 14:10:00.009858 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 30 14:10:00.009864 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 30 14:10:00.009869 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 30 14:10:00.009875 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 30 14:10:00.009925 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 30 14:10:00.009975 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 30 14:10:00.010025 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 14:10:00.010033 kernel: vgaarb: loaded Jan 30 14:10:00.010039 kernel: clocksource: Switched to clocksource tsc-early Jan 30 14:10:00.010045 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 14:10:00.010051 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 14:10:00.010056 kernel: pnp: PnP ACPI init Jan 30 14:10:00.010103 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 14:10:00.010152 kernel: pnp 00:02: [dma 0 disabled] Jan 30 14:10:00.010200 kernel: pnp 00:03: [dma 0 disabled] Jan 30 14:10:00.010248 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 14:10:00.010291 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 14:10:00.010337 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 14:10:00.010382 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 14:10:00.010429 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 14:10:00.010472 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 14:10:00.010515 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 14:10:00.010560 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 14:10:00.010602 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 14:10:00.010646 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 14:10:00.010726 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 14:10:00.010777 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 14:10:00.010820 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 14:10:00.010863 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 14:10:00.010905 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 14:10:00.010947 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 14:10:00.010989 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 14:10:00.011032 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 14:10:00.011079 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 14:10:00.011088 kernel: pnp: PnP ACPI: found 10 devices Jan 30 14:10:00.011094 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 14:10:00.011100 kernel: NET: Registered PF_INET protocol family Jan 30 14:10:00.011106 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:10:00.011111 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 14:10:00.011117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 14:10:00.011123 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:10:00.011130 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 14:10:00.011136 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 14:10:00.011142 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 14:10:00.011147 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 14:10:00.011154 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 14:10:00.011160 kernel: NET: Registered PF_XDP protocol family Jan 30 14:10:00.011208 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 30 14:10:00.011256 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 30 14:10:00.011306 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 30 14:10:00.011356 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 14:10:00.011404 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 14:10:00.011454 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 14:10:00.011502 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 14:10:00.011550 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 14:10:00.011596 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 14:10:00.011644 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 14:10:00.011727 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 14:10:00.011774 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 14:10:00.011821 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 14:10:00.011868 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 14:10:00.011915 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 14:10:00.011964 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 
14:10:00.012011 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 14:10:00.012057 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 14:10:00.012105 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 14:10:00.012153 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 14:10:00.012201 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:10:00.012247 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 14:10:00.012295 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 14:10:00.012341 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:10:00.012387 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 14:10:00.012429 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 14:10:00.012471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 14:10:00.012512 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 14:10:00.012553 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 30 14:10:00.012594 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 14:10:00.012642 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 30 14:10:00.012712 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 14:10:00.012776 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 30 14:10:00.012820 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 30 14:10:00.012867 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 14:10:00.012911 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 30 14:10:00.012957 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 30 14:10:00.013003 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 30 14:10:00.013049 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 14:10:00.013094 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 30 14:10:00.013102 kernel: PCI: CLS 64 bytes, default 64 Jan 30 14:10:00.013108 kernel: DMAR: No ATSR found Jan 30 14:10:00.013114 kernel: DMAR: No SATC found Jan 30 14:10:00.013120 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 14:10:00.013166 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 30 14:10:00.013216 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 30 14:10:00.013263 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 30 14:10:00.013311 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 30 14:10:00.013357 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 30 14:10:00.013404 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 30 14:10:00.013450 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 30 14:10:00.013497 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 30 14:10:00.013544 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 30 14:10:00.013590 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 30 14:10:00.013639 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 30 14:10:00.013707 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 30 14:10:00.013768 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 30 14:10:00.013814 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 30 14:10:00.013861 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 30 14:10:00.013907 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 30 14:10:00.013954 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 30 
14:10:00.014000 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 30 14:10:00.014050 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 30 14:10:00.014096 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 30 14:10:00.014144 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 30 14:10:00.014191 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 30 14:10:00.014240 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 30 14:10:00.014289 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 30 14:10:00.014337 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 14:10:00.014386 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 30 14:10:00.014437 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 30 14:10:00.014445 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 14:10:00.014451 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 14:10:00.014457 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 30 14:10:00.014463 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 30 14:10:00.014469 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 14:10:00.014474 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 14:10:00.014480 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 14:10:00.014528 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 14:10:00.014539 kernel: Initialise system trusted keyrings Jan 30 14:10:00.014544 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 14:10:00.014550 kernel: Key type asymmetric registered Jan 30 14:10:00.014556 kernel: Asymmetric key parser 'x509' registered Jan 30 14:10:00.014561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 14:10:00.014567 kernel: io scheduler mq-deadline registered Jan 30 14:10:00.014573 kernel: io scheduler kyber registered Jan 30 14:10:00.014578 kernel: io scheduler bfq registered Jan 30 14:10:00.014625 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 30 14:10:00.014675 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 30 14:10:00.014756 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 30 14:10:00.014804 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 30 14:10:00.014851 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 30 14:10:00.014897 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 30 14:10:00.014949 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 14:10:00.014959 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 14:10:00.014965 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 14:10:00.014971 kernel: pstore: Using crash dump compression: deflate Jan 30 14:10:00.014976 kernel: pstore: Registered erst as persistent store backend Jan 30 14:10:00.014982 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 14:10:00.014988 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:10:00.014994 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 14:10:00.014999 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 14:10:00.015005 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 30 14:10:00.015055 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 14:10:00.015064 kernel: i8042: PNP: No PS/2 controller found. 
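The long run of "Adding to iommu group N" records above is persisted in sysfs once the system is up; a minimal sketch that rebuilds the same device-to-group mapping from the standard /sys/kernel/iommu_groups layout:

```python
# Each numbered group directory lists its member PCI devices as symlinks,
# mirroring the "pci ...: Adding to iommu group N" boot records.
from pathlib import Path

for group in sorted(Path("/sys/kernel/iommu_groups").iterdir(),
                    key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {devices}")
```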
Jan 30 14:10:00.015107 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 14:10:00.015150 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 14:10:00.015195 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T14:09:58 UTC (1738246198) Jan 30 14:10:00.015238 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 14:10:00.015246 kernel: intel_pstate: Intel P-state driver initializing Jan 30 14:10:00.015252 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 14:10:00.015259 kernel: intel_pstate: HWP enabled Jan 30 14:10:00.015265 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 30 14:10:00.015270 kernel: vesafb: scrolling: redraw Jan 30 14:10:00.015276 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 30 14:10:00.015282 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000e64b71a3, using 768k, total 768k Jan 30 14:10:00.015287 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:10:00.015293 kernel: fb0: VESA VGA frame buffer device Jan 30 14:10:00.015299 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:10:00.015305 kernel: Segment Routing with IPv6 Jan 30 14:10:00.015311 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:10:00.015317 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:10:00.015323 kernel: Key type dns_resolver registered Jan 30 14:10:00.015329 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 14:10:00.015334 kernel: IPI shorthand broadcast: enabled Jan 30 14:10:00.015340 kernel: sched_clock: Marking stable (2540323890, 1385156211)->(4468964782, -543484681) Jan 30 14:10:00.015346 kernel: registered taskstats version 1 Jan 30 14:10:00.015352 kernel: Loading compiled-in X.509 certificates Jan 30 14:10:00.015357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 14:10:00.015364 kernel: Key type .fscrypt registered Jan 30 14:10:00.015369 kernel: Key type fscrypt-provisioning registered Jan 30 14:10:00.015375 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:10:00.015380 kernel: ima: No architecture policies found Jan 30 14:10:00.015386 kernel: clk: Disabling unused clocks Jan 30 14:10:00.015392 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 14:10:00.015398 kernel: Write protecting the kernel read-only data: 36864k Jan 30 14:10:00.015403 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 14:10:00.015410 kernel: Run /init as init process Jan 30 14:10:00.015416 kernel: with arguments: Jan 30 14:10:00.015421 kernel: /init Jan 30 14:10:00.015427 kernel: with environment: Jan 30 14:10:00.015432 kernel: HOME=/ Jan 30 14:10:00.015438 kernel: TERM=linux Jan 30 14:10:00.015443 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:10:00.015450 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:10:00.015458 systemd[1]: Detected architecture x86-64. Jan 30 14:10:00.015464 systemd[1]: Running in initrd. Jan 30 14:10:00.015470 systemd[1]: No hostname configured, using default hostname. Jan 30 14:10:00.015476 systemd[1]: Hostname set to . Jan 30 14:10:00.015482 systemd[1]: Initializing machine ID from random generator. 
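"Initializing machine ID from random generator" above means systemd minted a fresh 128-bit ID for this first boot; once the real root is up it is persisted as 32 lowercase hex digits. A trivial check against the standard /etc/machine-id path (illustrative; inside the initrd itself the file may not be committed yet):

```python
# Validate the machine ID format systemd writes: 32 hex chars on one line.
mid = open("/etc/machine-id").read().strip()
assert len(mid) == 32 and all(c in "0123456789abcdef" for c in mid)
print(mid)
```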
Jan 30 14:10:00.015488 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:10:00.015494 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:00.015500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:00.015507 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:10:00.015513 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:10:00.015519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:10:00.015525 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:10:00.015531 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:10:00.015538 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:10:00.015543 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 30 14:10:00.015550 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 30 14:10:00.015556 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:00.015562 kernel: clocksource: Switched to clocksource tsc Jan 30 14:10:00.015568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:00.015574 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:10:00.015580 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:10:00.015586 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:10:00.015592 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:10:00.015598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:00.015605 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:00.015611 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:10:00.015617 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:10:00.015622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:00.015628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:00.015634 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:00.015640 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:10:00.015646 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:10:00.015653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:10:00.015661 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:10:00.015667 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:10:00.015673 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:10:00.015709 systemd-journald[265]: Collecting audit messages is disabled. Jan 30 14:10:00.015737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:10:00.015744 systemd-journald[265]: Journal started Jan 30 14:10:00.015757 systemd-journald[265]: Runtime Journal (/run/log/journal/2fa9812467b54217b4034bb6faf272d6) is 8.0M, max 639.9M, 631.9M free. 
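Once systemd-journald is running as above, the very records quoted in this log can be pulled back out programmatically; a minimal sketch shelling out to journalctl's JSON output (assumes journalctl is on PATH on the booted system):

```python
# Fetch the last few kernel records from the current boot as JSON lines.
import json
import subprocess

out = subprocess.run(
    ["journalctl", "-k", "-b", "-o", "json", "-n", "5"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    print(json.loads(line).get("MESSAGE"))
```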
Jan 30 14:10:00.039022 systemd-modules-load[267]: Inserted module 'overlay' Jan 30 14:10:00.061662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:00.091643 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:00.173899 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:10:00.173916 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:10:00.173926 kernel: Bridge firewalling registered Jan 30 14:10:00.153504 systemd-modules-load[267]: Inserted module 'br_netfilter' Jan 30 14:10:00.164121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:00.184930 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:10:00.210962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:00.236083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:00.266085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:10:00.278554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:10:00.280182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:10:00.281865 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:10:00.287334 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:10:00.287965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:10:00.288074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:00.288722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:00.289454 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:10:00.292878 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:00.303883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:00.307355 systemd-resolved[301]: Positive Trust Anchors: Jan 30 14:10:00.307360 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:10:00.307386 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:10:00.309076 systemd-resolved[301]: Defaulting to hostname 'linux'. Jan 30 14:10:00.315974 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:10:00.347133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:00.375894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
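systemd-modules-load inserted 'overlay' and 'br_netfilter' above (the bridge warning is the kernel noting that bridge filtering now requires br_netfilter, which is exactly why it gets loaded); a quick verification sketch reading the standard /proc/modules listing:

```python
# Check that the modules reported as inserted at boot are still loaded.
loaded = {line.split()[0] for line in open("/proc/modules")}
for mod in ("overlay", "br_netfilter"):
    print(mod, "loaded" if mod in loaded else "missing")
```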
Jan 30 14:10:00.479911 dracut-cmdline[306]: dracut-dracut-053
Jan 30 14:10:00.487962 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:10:00.690692 kernel: SCSI subsystem initialized
Jan 30 14:10:00.713702 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:10:00.735665 kernel: iscsi: registered transport (tcp)
Jan 30 14:10:00.767756 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:10:00.767775 kernel: QLogic iSCSI HBA Driver
Jan 30 14:10:00.800876 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:00.821978 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:10:00.903112 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:10:00.903140 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:10:00.922977 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:10:00.980731 kernel: raid6: avx2x4 gen() 51399 MB/s
Jan 30 14:10:01.012692 kernel: raid6: avx2x2 gen() 51427 MB/s
Jan 30 14:10:01.049641 kernel: raid6: avx2x1 gen() 43333 MB/s
Jan 30 14:10:01.049658 kernel: raid6: using algorithm avx2x2 gen() 51427 MB/s
Jan 30 14:10:01.097309 kernel: raid6: .... xor() 30096 MB/s, rmw enabled
Jan 30 14:10:01.097329 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 14:10:01.138702 kernel: xor: automatically using best checksumming function avx
Jan 30 14:10:01.254696 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:10:01.260978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:01.288915 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:01.296158 systemd-udevd[492]: Using default interface naming scheme 'v255'.
Jan 30 14:10:01.299885 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:01.333895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:10:01.364588 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Jan 30 14:10:01.381301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:01.410031 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:01.498525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:01.532668 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 14:10:01.532725 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 14:10:01.558681 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 14:10:01.583039 kernel: ACPI: bus type USB registered
Jan 30 14:10:01.583058 kernel: usbcore: registered new interface driver usbfs
Jan 30 14:10:01.598208 kernel: usbcore: registered new interface driver hub
Jan 30 14:10:01.612887 kernel: usbcore: registered new device driver usb
Jan 30 14:10:01.626869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:10:01.637798 kernel: PTP clock support registered
Jan 30 14:10:01.637877 kernel: libata version 3.00 loaded.
Jan 30 14:10:01.659484 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 14:10:01.659521 kernel: AES CTR mode by8 optimization enabled
Jan 30 14:10:01.664043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:01.697095 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 30 14:10:01.816754 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jan 30 14:10:01.816845 kernel: ahci 0000:00:17.0: version 3.0
Jan 30 14:10:01.978475 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jan 30 14:10:01.978550 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Jan 30 14:10:01.978616 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jan 30 14:10:01.978683 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jan 30 14:10:01.978745 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jan 30 14:10:01.978808 kernel: scsi host0: ahci
Jan 30 14:10:01.978872 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jan 30 14:10:01.978933 kernel: scsi host1: ahci
Jan 30 14:10:01.978992 kernel: hub 1-0:1.0: USB hub found
Jan 30 14:10:01.979062 kernel: scsi host2: ahci
Jan 30 14:10:01.979120 kernel: hub 1-0:1.0: 16 ports detected
Jan 30 14:10:01.979187 kernel: scsi host3: ahci
Jan 30 14:10:01.979250 kernel: hub 2-0:1.0: USB hub found
Jan 30 14:10:01.979318 kernel: scsi host4: ahci
Jan 30 14:10:01.979375 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jan 30 14:10:01.979384 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jan 30 14:10:01.979391 kernel: hub 2-0:1.0: 10 ports detected
Jan 30 14:10:01.979455 kernel: scsi host5: ahci
Jan 30 14:10:01.979515 kernel: pps pps0: new PPS source ptp0
Jan 30 14:10:01.979576 kernel: scsi host6: ahci
Jan 30 14:10:01.979636 kernel: igb 0000:03:00.0: added PHC on eth0
Jan 30 14:10:01.979715 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Jan 30 14:10:01.979724 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 30 14:10:01.979785 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Jan 30 14:10:01.979794 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44
Jan 30 14:10:01.979856 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Jan 30 14:10:01.979864 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Jan 30 14:10:01.979926 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Jan 30 14:10:01.979934 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 30 14:10:01.979994 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Jan 30 14:10:01.980002 kernel: pps pps1: new PPS source ptp1
Jan 30 14:10:01.980059 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Jan 30 14:10:01.980067 kernel: igb 0000:04:00.0: added PHC on eth1
Jan 30 14:10:02.292944 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Jan 30 14:10:02.292957 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jan 30 14:10:02.293041 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jan 30 14:10:02.293164 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45
Jan 30 14:10:02.293242 kernel: hub 1-14:1.0: USB hub found
Jan 30 14:10:02.293328 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Jan 30 14:10:02.293403 kernel: hub 1-14:1.0: 4 ports detected
Jan 30 14:10:02.293483 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 30 14:10:02.293557 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 30 14:10:01.704736 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:02.475780 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 14:10:02.475794 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jan 30 14:10:02.475804 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 14:10:02.475812 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 14:10:02.475819 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Jan 30 14:10:02.475826 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 14:10:02.475833 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 30 14:10:02.475840 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Jan 30 14:10:01.757708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:02.607541 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 30 14:10:02.607555 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002
Jan 30 14:10:03.017422 kernel: ata2.00: Features: NCQ-prio
Jan 30 14:10:03.017433 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jan 30 14:10:03.017551 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 30 14:10:03.017623 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jan 30 14:10:03.017632 kernel: ata1.00: Features: NCQ-prio
Jan 30 14:10:03.017643 kernel: ata2.00: configured for UDMA/133
Jan 30 14:10:03.017651 kernel: ata1.00: configured for UDMA/133
Jan 30 14:10:03.017662 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Jan 30 14:10:03.314573 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:10:03.314585 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Jan 30 14:10:03.314666 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Jan 30 14:10:03.314796 kernel: usbcore: registered new interface driver usbhid
Jan 30 14:10:03.314812 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Jan 30 14:10:03.314918 kernel: usbhid: USB HID core driver
Jan 30 14:10:03.314931 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jan 30 14:10:03.314944 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 30 14:10:03.315041 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Jan 30 14:10:03.315164 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jan 30 14:10:03.315260 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jan 30 14:10:03.315269 kernel: ata1.00: Enabling discard_zeroes_data
Jan 30 14:10:03.315276 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jan 30 14:10:03.315347 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 14:10:03.315355 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 30 14:10:03.315443 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 14:10:03.315505 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 14:10:03.315569 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jan 30 14:10:03.315633 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jan 30 14:10:03.315704 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Jan 30 14:10:03.315769 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jan 30 14:10:03.315832 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jan 30 14:10:03.315909 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 14:10:03.315968 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 14:10:03.316026 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Jan 30 14:10:03.316083 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 14:10:03.316092 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:10:03.316101 kernel: GPT:9289727 != 937703087
Jan 30 14:10:03.316108 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:10:03.316116 kernel: GPT:9289727 != 937703087
Jan 30 14:10:03.316122 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:10:03.316129 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 14:10:03.316136 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jan 30 14:10:03.316196 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 14:10:03.316260 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jan 30 14:10:03.316322 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002
Jan 30 14:10:03.625918 kernel: ata1.00: Enabling discard_zeroes_data
Jan 30 14:10:03.625929 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jan 30 14:10:03.626004 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 14:10:03.626073 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jan 30 14:10:03.626136 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (562)
Jan 30 14:10:03.626145 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Jan 30 14:10:03.626205 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (578)
Jan 30 14:10:03.626213 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 14:10:03.626277 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 14:10:03.626285 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 14:10:02.449049 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:02.528707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:03.680764 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 14:10:03.680776 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 14:10:02.528759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:02.670269 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:03.730713 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 14:10:03.730733 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Jan 30 14:10:03.730845 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 14:10:02.736741 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:10:03.770796 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Jan 30 14:10:02.766770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:03.771020 disk-uuid[707]: Primary Header is updated.
Jan 30 14:10:03.771020 disk-uuid[707]: Secondary Entries is updated.
Jan 30 14:10:03.771020 disk-uuid[707]: Secondary Header is updated.
Jan 30 14:10:02.766804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:02.821787 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:02.891189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:03.460904 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:03.476310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:03.497878 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT.
Jan 30 14:10:03.519032 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM.
Jan 30 14:10:03.533117 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A.
Jan 30 14:10:03.543837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A.
Jan 30 14:10:03.559675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM.
Jan 30 14:10:03.578899 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:10:03.604154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:03.785181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:04.703983 kernel: ata2.00: Enabling discard_zeroes_data
Jan 30 14:10:04.723668 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Jan 30 14:10:04.723728 disk-uuid[708]: The operation has completed successfully.
Jan 30 14:10:04.755993 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:10:04.756041 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:10:04.790962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:10:04.828784 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 14:10:04.828850 sh[739]: Success
Jan 30 14:10:04.859427 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:10:04.886790 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:10:04.894976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:10:04.970719 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 14:10:04.970745 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:10:04.991572 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:10:05.009743 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:10:05.026829 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:10:05.062699 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 14:10:05.064771 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:10:05.073198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:10:05.083836 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:10:05.205794 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:10:05.205809 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:10:05.205816 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 14:10:05.205823 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 14:10:05.205830 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 14:10:05.205837 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:10:05.211828 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:10:05.213195 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:10:05.254839 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:10:05.266006 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:10:05.303854 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:10:05.314896 systemd-networkd[923]: lo: Link UP
Jan 30 14:10:05.314583 ignition[889]: Ignition 2.19.0
Jan 30 14:10:05.314898 systemd-networkd[923]: lo: Gained carrier
Jan 30 14:10:05.314588 ignition[889]: Stage: fetch-offline
Jan 30 14:10:05.316617 unknown[889]: fetched base config from "system"
Jan 30 14:10:05.314610 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:05.316621 unknown[889]: fetched user config from "system"
Jan 30 14:10:05.314615 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 14:10:05.317278 systemd-networkd[923]: Enumeration completed
Jan 30 14:10:05.314670 ignition[889]: parsed url from cmdline: ""
Jan 30 14:10:05.317354 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:10:05.314672 ignition[889]: no config URL provided
Jan 30 14:10:05.318101 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:05.314675 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:10:05.336103 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:10:05.314697 ignition[889]: parsing config with SHA512: 048cf34dd90582b20c441ddf6f4475a9523bb969538a1b52ad174b1abda4b67236bdd94decd8e85e4bf3d842b2097762e652b1ec1f48847de2b359cf0a512573
Jan 30 14:10:05.345714 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:05.316841 ignition[889]: fetch-offline: fetch-offline passed
Jan 30 14:10:05.355257 systemd[1]: Reached target network.target - Network.
Jan 30 14:10:05.316844 ignition[889]: POST message to Packet Timeline
Jan 30 14:10:05.369832 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 14:10:05.316846 ignition[889]: POST Status error: resource requires networking
Jan 30 14:10:05.373883 systemd-networkd[923]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:05.316881 ignition[889]: Ignition finished successfully
Jan 30 14:10:05.376830 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:10:05.586956 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 30 14:10:05.386999 ignition[936]: Ignition 2.19.0
Jan 30 14:10:05.578200 systemd-networkd[923]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:05.387007 ignition[936]: Stage: kargs
Jan 30 14:10:05.387180 ignition[936]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:05.387192 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 14:10:05.388064 ignition[936]: kargs: kargs passed
Jan 30 14:10:05.388069 ignition[936]: POST message to Packet Timeline
Jan 30 14:10:05.388083 ignition[936]: GET https://metadata.packet.net/metadata: attempt #1
Jan 30 14:10:05.388725 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43432->[::1]:53: read: connection refused
Jan 30 14:10:05.589828 ignition[936]: GET https://metadata.packet.net/metadata: attempt #2
Jan 30 14:10:05.590991 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58689->[::1]:53: read: connection refused
Jan 30 14:10:05.818698 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 30 14:10:05.820069 systemd-networkd[923]: eno1: Link UP
Jan 30 14:10:05.820194 systemd-networkd[923]: eno2: Link UP
Jan 30 14:10:05.820314 systemd-networkd[923]: enp1s0f0np0: Link UP
Jan 30 14:10:05.820453 systemd-networkd[923]: enp1s0f0np0: Gained carrier
Jan 30 14:10:05.830891 systemd-networkd[923]: enp1s0f1np1: Link UP
Jan 30 14:10:05.862836 systemd-networkd[923]: enp1s0f0np0: DHCPv4 address 139.178.70.199/31, gateway 139.178.70.198 acquired from 145.40.83.140
Jan 30 14:10:05.991535 ignition[936]: GET https://metadata.packet.net/metadata: attempt #3
Jan 30 14:10:05.992629 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42378->[::1]:53: read: connection refused
Jan 30 14:10:06.620474 systemd-networkd[923]: enp1s0f1np1: Gained carrier
Jan 30 14:10:06.793177 ignition[936]: GET https://metadata.packet.net/metadata: attempt #4
Jan 30 14:10:06.794330 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43137->[::1]:53: read: connection refused
Jan 30 14:10:06.876294 systemd-networkd[923]: enp1s0f0np0: Gained IPv6LL
Jan 30 14:10:07.836288 systemd-networkd[923]: enp1s0f1np1: Gained IPv6LL
Jan 30 14:10:08.396045 ignition[936]: GET https://metadata.packet.net/metadata: attempt #5
Jan 30 14:10:08.397228 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52340->[::1]:53: read: connection refused
Jan 30 14:10:11.600710 ignition[936]: GET https://metadata.packet.net/metadata: attempt #6
Jan 30 14:10:12.707158 ignition[936]: GET result: OK
Jan 30 14:10:13.029797 ignition[936]: Ignition finished successfully
Jan 30 14:10:13.034648 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:10:13.059072 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:10:13.121270 ignition[954]: Ignition 2.19.0
Jan 30 14:10:13.121288 ignition[954]: Stage: disks
Jan 30 14:10:13.121634 ignition[954]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:13.121657 ignition[954]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 14:10:13.123513 ignition[954]: disks: disks passed
Jan 30 14:10:13.123522 ignition[954]: POST message to Packet Timeline
Jan 30 14:10:13.123550 ignition[954]: GET https://metadata.packet.net/metadata: attempt #1
Jan 30 14:10:14.788150 ignition[954]: GET result: OK
Jan 30 14:10:15.211068 ignition[954]: Ignition finished successfully
Jan 30 14:10:15.214270 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:10:15.228990 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:15.247038 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:10:15.268037 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:10:15.289131 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:10:15.309078 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:10:15.337921 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:10:15.372814 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 14:10:15.384297 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:10:15.394882 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:10:15.517666 kernel: EXT4-fs (sdb9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 14:10:15.517730 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:10:15.518065 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:10:15.554876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:10:15.563376 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:10:15.679379 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (984)
Jan 30 14:10:15.679420 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:10:15.679437 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:10:15.679445 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 14:10:15.679452 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 14:10:15.679459 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 14:10:15.584681 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 14:10:15.696148 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 30 14:10:15.707877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:10:15.707895 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:10:15.780915 coreos-metadata[986]: Jan 30 14:10:15.748 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 14:10:15.714840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:10:15.814765 coreos-metadata[1002]: Jan 30 14:10:15.748 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 30 14:10:15.744813 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:10:15.784978 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:10:15.847771 initrd-setup-root[1017]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:10:15.857728 initrd-setup-root[1024]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:10:15.867883 initrd-setup-root[1031]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:10:15.866860 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:10:15.888922 initrd-setup-root[1038]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:10:15.894883 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:10:15.951867 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:10:15.936511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:10:15.961495 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:10:15.983978 ignition[1105]: INFO : Ignition 2.19.0
Jan 30 14:10:15.983978 ignition[1105]: INFO : Stage: mount
Jan 30 14:10:15.997893 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:15.997893 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 14:10:15.997893 ignition[1105]: INFO : mount: mount passed
Jan 30 14:10:15.997893 ignition[1105]: INFO : POST message to Packet Timeline
Jan 30 14:10:15.997893 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 14:10:15.991105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:10:17.008172 coreos-metadata[986]: Jan 30 14:10:17.008 INFO Fetch successful
Jan 30 14:10:17.083762 coreos-metadata[986]: Jan 30 14:10:17.083 INFO wrote hostname ci-4081.3.0-a-feecaa3039 to /sysroot/etc/hostname
Jan 30 14:10:17.085043 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:10:17.119961 ignition[1105]: INFO : GET result: OK
Jan 30 14:10:17.451752 ignition[1105]: INFO : Ignition finished successfully
Jan 30 14:10:17.454413 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:10:17.647627 coreos-metadata[1002]: Jan 30 14:10:17.647 INFO Fetch successful
Jan 30 14:10:17.684845 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 30 14:10:17.684900 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 30 14:10:17.710836 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:10:17.737094 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:10:17.781669 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1131)
Jan 30 14:10:17.811169 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:10:17.811185 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:10:17.828997 kernel: BTRFS info (device sdb6): using free space tree
Jan 30 14:10:17.867254 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 30 14:10:17.867276 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 30 14:10:17.880257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:10:17.911103 ignition[1148]: INFO : Ignition 2.19.0
Jan 30 14:10:17.911103 ignition[1148]: INFO : Stage: files
Jan 30 14:10:17.925891 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:17.925891 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 14:10:17.925891 ignition[1148]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:10:17.925891 ignition[1148]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:10:17.925891 ignition[1148]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:10:17.925891 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 14:10:17.925891 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 14:10:17.915345 unknown[1148]: wrote ssh authorized keys file for user: core
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:10:18.064890 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:10:18.321011 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 14:10:18.503564 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 14:10:18.782768 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:10:18.782768 ignition[1148]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:10:18.811983 ignition[1148]: INFO : files: files passed
Jan 30 14:10:18.811983 ignition[1148]: INFO : POST message to Packet Timeline
Jan 30 14:10:18.811983 ignition[1148]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 14:10:20.337141 ignition[1148]: INFO : GET result: OK
Jan 30 14:10:20.673933 ignition[1148]: INFO : Ignition finished successfully
Jan 30 14:10:20.675259 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:10:20.707938 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:10:20.718241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:10:20.728064 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:10:20.728121 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:10:20.790141 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:10:20.790141 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:10:20.803899 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:10:20.791406 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:10:20.828338 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:10:20.862873 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:10:20.915064 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:10:20.915151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:10:20.935080 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:10:20.955876 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:10:20.976074 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:10:20.991771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:10:21.041688 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:10:21.073178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:10:21.123988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:21.135002 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:21.156999 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:10:21.175058 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:10:21.175249 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:10:21.204415 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:10:21.225289 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:10:21.243295 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:10:21.261281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:10:21.283397 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:21.304295 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:10:21.324267 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:21.346322 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:10:21.367304 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:10:21.387290 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:10:21.405168 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:10:21.405577 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:21.440133 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:21.450307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:21.472274 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:10:21.472693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:21.495165 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:10:21.495567 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:21.527269 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:10:21.527741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:10:21.548481 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:10:21.566129 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:10:21.566592 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:21.587288 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:10:21.605296 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:10:21.624362 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:10:21.624695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:21.644322 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:10:21.644624 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:21.667363 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:10:21.667799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:10:21.687366 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:10:21.687774 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:10:21.810913 ignition[1212]: INFO : Ignition 2.19.0
Jan 30 14:10:21.810913 ignition[1212]: INFO : Stage: umount
Jan 30 14:10:21.810913 ignition[1212]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:21.810913 ignition[1212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 30 14:10:21.810913 ignition[1212]: INFO : umount: umount passed
Jan 30 14:10:21.810913 ignition[1212]: INFO : POST message to Packet Timeline
Jan 30 14:10:21.810913 ignition[1212]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 30 14:10:21.705385 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 14:10:21.705810 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:10:21.735830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:10:21.741547 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:10:21.772934 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:10:21.773392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:21.801938 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:10:21.802012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:21.847841 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:10:21.848506 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:10:21.848600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:10:21.858802 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:10:21.858935 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:10:22.195426 ignition[1212]: INFO : GET result: OK
Jan 30 14:10:22.528887 ignition[1212]: INFO : Ignition finished successfully
Jan 30 14:10:22.531576 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:10:22.531927 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:10:22.548949 systemd[1]: Stopped target network.target - Network.
Jan 30 14:10:22.563916 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:10:22.564130 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:10:22.582021 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:10:22.582170 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:10:22.600103 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:10:22.600265 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:10:22.618091 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:10:22.618266 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:22.636072 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:10:22.636262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:10:22.655490 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:10:22.665806 systemd-networkd[923]: enp1s0f0np0: DHCPv6 lease lost
Jan 30 14:10:22.673209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:22.674857 systemd-networkd[923]: enp1s0f1np1: DHCPv6 lease lost
Jan 30 14:10:22.693772 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:10:22.694053 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:22.714071 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:10:22.714422 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:10:22.735371 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:10:22.735499 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:22.773826 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:10:22.781801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:10:22.781837 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:10:22.801932 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:10:22.802017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:22.819961 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:10:22.820068 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:22.841076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:10:22.841247 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:22.860328 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:22.879993 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:10:22.880406 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:22.917215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:10:22.917260 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:22.917965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:10:22.917996 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:22.952926 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:10:22.953011 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:22.983300 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:10:22.983517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:23.012176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:23.012386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:23.058080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:10:23.063016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:10:23.063212 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:23.094041 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 14:10:23.094204 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:23.116043 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:10:23.116192 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:23.135004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:23.392846 systemd-journald[265]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:10:23.135153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:23.158145 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:10:23.158436 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:10:23.217905 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:10:23.218167 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:10:23.237940 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:10:23.271999 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:10:23.323739 systemd[1]: Switching root.
Jan 30 14:10:23.465858 systemd-journald[265]: Journal stopped
Jan 30 14:10:26.104341 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:10:26.104355 kernel: SELinux: policy capability open_perms=1
Jan 30 14:10:26.104362 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:10:26.104368 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:10:26.104373 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:10:26.104378 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:10:26.104384 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:10:26.104389 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:10:26.104394 kernel: audit: type=1403 audit(1738246223.704:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:10:26.104400 systemd[1]: Successfully loaded SELinux policy in 162.701ms.
Jan 30 14:10:26.104408 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.253ms.
Jan 30 14:10:26.104414 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:26.104420 systemd[1]: Detected architecture x86-64.
Jan 30 14:10:26.104426 systemd[1]: Detected first boot.
Jan 30 14:10:26.104432 systemd[1]: Hostname set to <ci-4081.3.0-a-feecaa3039>.
Jan 30 14:10:26.104440 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:10:26.104446 zram_generator::config[1260]: No configuration found.
Jan 30 14:10:26.104453 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:10:26.104459 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:10:26.104464 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:10:26.104471 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:10:26.104477 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:10:26.104484 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:10:26.104490 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:10:26.104497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:10:26.104503 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:10:26.104509 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:10:26.104516 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:10:26.104522 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:10:26.104529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:26.104535 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:26.104541 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:10:26.104548 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:10:26.104554 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:10:26.104560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:26.104566 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jan 30 14:10:26.104573 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:26.104580 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:10:26.104587 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:10:26.104593 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:10:26.104601 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:10:26.104607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:26.104614 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:26.104620 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:26.104628 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:26.104634 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:10:26.104640 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:10:26.104647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:26.104653 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:26.104663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:26.104672 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:10:26.104679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:10:26.104685 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:10:26.104692 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:10:26.104699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:10:26.104705 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:10:26.104712 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:10:26.104720 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:10:26.104727 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:10:26.104733 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:10:26.104740 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:10:26.104747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:26.104754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:26.104760 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:10:26.104767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:26.104774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:26.104781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:26.104788 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:10:26.104794 kernel: ACPI: bus type drm_connector registered
Jan 30 14:10:26.104800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:26.104807 kernel: fuse: init (API version 7.39)
Jan 30 14:10:26.104813 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:10:26.104819 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 14:10:26.104826 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 14:10:26.104833 kernel: loop: module loaded
Jan 30 14:10:26.104839 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 14:10:26.104846 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 14:10:26.104852 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:26.104866 systemd-journald[1365]: Collecting audit messages is disabled.
Jan 30 14:10:26.104881 systemd-journald[1365]: Journal started
Jan 30 14:10:26.104895 systemd-journald[1365]: Runtime Journal (/run/log/journal/ff36de30c9ab4abca7e15bfbd15fc86e) is 8.0M, max 639.9M, 631.9M free.
Jan 30 14:10:24.213929 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:10:24.229031 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Jan 30 14:10:24.229281 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 14:10:26.132738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:26.166726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:10:26.199707 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:10:26.232702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:26.262701 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:10:26.262745 systemd[1]: Stopped verity-setup.service. Jan 30 14:10:26.329706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:26.350844 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:10:26.360221 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:10:26.369923 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:10:26.379918 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:10:26.389895 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:10:26.399887 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:10:26.409888 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:10:26.420018 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:10:26.431147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:26.442331 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:10:26.442603 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:10:26.454568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:26.454974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:10:26.466575 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:10:26.467045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:10:26.477777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:26.478188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:26.489625 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:10:26.490042 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:10:26.500677 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:26.501075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:26.511601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:26.522601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:10:26.534551 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:10:26.546575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:26.582265 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:10:26.606963 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:10:26.620725 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:10:26.630915 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:10:26.630934 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:26.631550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:10:26.663075 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 30 14:10:26.675079 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:10:26.684999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:26.687410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:10:26.698365 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:10:26.709805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:10:26.710527 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:10:26.716228 systemd-journald[1365]: Time spent on flushing to /var/log/journal/ff36de30c9ab4abca7e15bfbd15fc86e is 13.239ms for 1371 entries. Jan 30 14:10:26.716228 systemd-journald[1365]: System Journal (/var/log/journal/ff36de30c9ab4abca7e15bfbd15fc86e) is 8.0M, max 195.6M, 187.6M free. Jan 30 14:10:26.761209 systemd-journald[1365]: Received client request to flush runtime journal. Jan 30 14:10:26.727808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:10:26.728438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:10:26.739920 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:10:26.757339 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:10:26.779451 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:10:26.785666 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 14:10:26.786634 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:10:26.798562 systemd-tmpfiles[1398]: ACLs are not supported, ignoring. Jan 30 14:10:26.798572 systemd-tmpfiles[1398]: ACLs are not supported, ignoring. Jan 30 14:10:26.814914 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:10:26.821710 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:10:26.831889 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:10:26.842853 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:10:26.853885 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:10:26.871829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:26.881665 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 14:10:26.890878 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:10:26.904559 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:10:26.925864 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:10:26.941303 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:10:26.953699 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 14:10:26.963515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:10:26.964140 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
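The journal figures above (Runtime Journal 8.0M of max 639.9M in /run, System Journal 8.0M of max 195.6M in /var) are derived from the size of the backing filesystem unless capped explicitly; systemd-journal-flush.service is the unit that migrates the early /run journal to persistent storage, which is the "Received client request to flush runtime journal" exchange. The limits can be pinned with a journald drop-in; values here are illustrative:

    # /etc/systemd/journald.conf.d/10-size.conf (illustrative limits)
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=512M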
Jan 30 14:10:26.975354 udevadm[1401]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:10:26.988697 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:10:27.010857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:10:27.018547 systemd-tmpfiles[1419]: ACLs are not supported, ignoring. Jan 30 14:10:27.018557 systemd-tmpfiles[1419]: ACLs are not supported, ignoring. Jan 30 14:10:27.027087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:27.039724 kernel: loop3: detected capacity change from 0 to 8 Jan 30 14:10:27.064782 ldconfig[1391]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:10:27.066024 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:10:27.092724 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 14:10:27.143751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:10:27.152702 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 14:10:27.184666 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 14:10:27.185846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:27.197830 systemd-udevd[1427]: Using default interface naming scheme 'v255'. Jan 30 14:10:27.213109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:27.214592 (sd-merge)[1424]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 30 14:10:27.214779 kernel: loop7: detected capacity change from 0 to 8 Jan 30 14:10:27.214891 (sd-merge)[1424]: Merged extensions into '/usr'. Jan 30 14:10:27.235216 systemd[1]: Reloading requested from client PID 1396 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:10:27.235227 systemd[1]: Reloading... Jan 30 14:10:27.252346 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 30 14:10:27.252600 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1439) Jan 30 14:10:27.252625 kernel: ACPI: button: Sleep Button [SLPB] Jan 30 14:10:27.286954 zram_generator::config[1536]: No configuration found. Jan 30 14:10:27.287062 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 14:10:27.331673 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:10:27.371688 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:10:27.383670 kernel: IPMI message handler: version 39.2 Jan 30 14:10:27.383719 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 30 14:10:27.431521 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 30 14:10:27.431606 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 30 14:10:27.436881 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 30 14:10:27.499069 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Jan 30 14:10:27.499168 kernel: i2c i2c-0: Successfully instantiated SPD at 0x50 Jan 30 14:10:27.414504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
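The (sd-merge) lines record systemd-sysext overlay-mounting the extension images containerd-flatcar, docker-flatcar, kubernetes and oem-packet onto /usr, which is how Flatcar layers those components over its read-only, verity-protected /usr. The merge can be inspected or re-applied from a shell:

    # Inspect and re-apply merged system extensions
    $ systemd-sysext status
    $ systemd-sysext refresh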
Jan 30 14:10:27.467531 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 30 14:10:27.467840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 14:10:27.509664 kernel: iTCO_vendor_support: vendor-support=0 Jan 30 14:10:27.509688 kernel: ipmi device interface Jan 30 14:10:27.516807 systemd[1]: Reloading finished in 281 ms. Jan 30 14:10:27.553360 kernel: ipmi_si: IPMI System Interface driver Jan 30 14:10:27.553417 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jan 30 14:10:27.566637 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 30 14:10:27.642847 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 30 14:10:27.642922 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 30 14:10:27.642934 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 30 14:10:27.642942 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 30 14:10:27.712066 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 30 14:10:27.712146 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 30 14:10:27.712215 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 30 14:10:27.712228 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 30 14:10:27.759669 kernel: intel_rapl_common: Found RAPL domain package Jan 30 14:10:27.759715 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 30 14:10:27.759815 kernel: intel_rapl_common: Found RAPL domain core Jan 30 14:10:27.759828 kernel: intel_rapl_common: Found RAPL domain dram Jan 30 14:10:27.828995 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:10:27.869669 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jan 30 14:10:27.874900 systemd[1]: Starting ensure-sysext.service... Jan 30 14:10:27.882217 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:10:27.893605 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:27.903220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:10:27.903850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:27.905345 systemd[1]: Reloading requested from client PID 1607 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:10:27.905352 systemd[1]: Reloading... Jan 30 14:10:27.931671 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 30 14:10:27.931826 zram_generator::config[1638]: No configuration found. Jan 30 14:10:27.958704 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 30 14:10:27.979799 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:10:27.980014 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:10:27.980510 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:10:27.980686 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. 
Jan 30 14:10:27.980723 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. Jan 30 14:10:27.982221 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:10:27.982225 systemd-tmpfiles[1611]: Skipping /boot Jan 30 14:10:27.986369 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:10:27.986373 systemd-tmpfiles[1611]: Skipping /boot Jan 30 14:10:28.012222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:10:28.065388 systemd[1]: Reloading finished in 159 ms. Jan 30 14:10:28.079221 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:10:28.100982 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:10:28.111908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:28.122882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:28.147852 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:10:28.158647 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:10:28.165356 augenrules[1720]: No rules Jan 30 14:10:28.170471 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:10:28.183629 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:10:28.190849 lvm[1725]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:10:28.207584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:10:28.218396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:10:28.231706 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:10:28.241448 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:10:28.251037 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:10:28.262058 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:10:28.272126 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:10:28.283011 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:10:28.292148 systemd-networkd[1609]: lo: Link UP Jan 30 14:10:28.292151 systemd-networkd[1609]: lo: Gained carrier Jan 30 14:10:28.294886 systemd-networkd[1609]: bond0: netdev ready Jan 30 14:10:28.295811 systemd-networkd[1609]: Enumeration completed Jan 30 14:10:28.295923 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:28.297623 systemd-networkd[1609]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Jan 30 14:10:28.309400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:28.319817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:28.319932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
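The systemd-tmpfiles "Duplicate line for path ... ignoring" warnings above mean two tmpfiles.d fragments declare the same path (/root, /var/log/journal, /var/lib/systemd); only the first line encountered takes effect. The supported way to override such a line is to ship a file of the same name earlier in the precedence order, since /etc/tmpfiles.d masks /usr/lib/tmpfiles.d entirely; a sketch with assumed override content:

    # /etc/tmpfiles.d/provision.conf -- masks /usr/lib/tmpfiles.d/provision.conf
    # (illustrative content; one line per managed path)
    d /root 0700 root root -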
Jan 30 14:10:28.324304 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:10:28.336360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:10:28.338388 lvm[1744]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:10:28.346394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:10:28.353513 systemd-resolved[1727]: Positive Trust Anchors: Jan 30 14:10:28.353519 systemd-resolved[1727]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:10:28.353544 systemd-resolved[1727]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:10:28.356063 systemd-resolved[1727]: Using system hostname 'ci-4081.3.0-a-feecaa3039'. Jan 30 14:10:28.366128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:10:28.375794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:28.376556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:10:28.388484 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:10:28.397855 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:10:28.397928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:28.398929 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:10:28.410222 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:10:28.421111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:28.421208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:10:28.432144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:28.432235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:28.444097 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:28.444188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:28.454243 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:10:28.469151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:28.469352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:28.485703 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 14:10:28.491000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
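The "Positive Trust Anchors" block above is systemd-resolved loading its built-in DNSSEC trust anchor: the ". IN DS 20326 8 2 ..." record is the root-zone KSK-2017 DS record, and the long negative list names private and reserved zones that are never validated. Resolver state and lookups can be checked interactively:

    # Inspect resolved's configuration and run a lookup through it
    $ resolvectl status
    $ resolvectl query flatcar.org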
Jan 30 14:10:28.508732 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 30 14:10:28.510034 systemd-networkd[1609]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2d.network. Jan 30 14:10:28.520047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:10:28.533058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:10:28.543822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:28.543932 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:10:28.543980 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:28.544420 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:10:28.554044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:28.554116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:10:28.564990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:28.565060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:28.575984 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:28.576054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:28.588139 systemd[1]: Reached target network.target - Network. Jan 30 14:10:28.596891 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:28.607881 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:28.608061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:28.623039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:10:28.633572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:10:28.643497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:10:28.655672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:10:28.668633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:28.669065 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:10:28.669354 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:10:28.672818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:28.673200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:10:28.688747 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 14:10:28.689940 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:10:28.690322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 30 14:10:28.722413 systemd-networkd[1609]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 30 14:10:28.722704 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 30 14:10:28.725273 systemd-networkd[1609]: enp1s0f0np0: Link UP Jan 30 14:10:28.726014 systemd-networkd[1609]: enp1s0f0np0: Gained carrier Jan 30 14:10:28.741269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:28.741402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:28.744784 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 30 14:10:28.755152 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:28.755251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:28.760403 systemd-networkd[1609]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Jan 30 14:10:28.760596 systemd-networkd[1609]: enp1s0f1np1: Link UP Jan 30 14:10:28.760812 systemd-networkd[1609]: enp1s0f1np1: Gained carrier Jan 30 14:10:28.766009 systemd[1]: Finished ensure-sysext.service. Jan 30 14:10:28.770858 systemd-networkd[1609]: bond0: Link UP Jan 30 14:10:28.771015 systemd-networkd[1609]: bond0: Gained carrier Jan 30 14:10:28.775146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:10:28.775180 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:10:28.786836 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:10:28.832031 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:10:28.851556 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Jan 30 14:10:28.851581 kernel: bond0: active interface up! Jan 30 14:10:28.862766 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:28.872783 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:10:28.883756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:10:28.895752 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:10:28.906729 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:10:28.906746 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:10:28.914739 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:10:28.924816 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:10:28.934779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:10:28.945743 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:10:28.953946 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:10:28.965832 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:10:28.984707 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jan 30 14:10:28.994544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
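The sequence above (each enp1s0f* port configured from a MAC-named .network file, then enslaved into bond0, then "No 802.3ad response from the link partner") is systemd-networkd assembling an LACP bond whose switch side has not answered yet. The Packet-generated files themselves are not shown in the log, but a .netdev/.network pair along these lines would produce this behavior (contents assumed):

    # 05-bond0.netdev (sketch) -- defines the bond device itself
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # 10-b8:59:9f:de:85:2c.network (sketch) -- enslaves the matching port
    [Match]
    MACAddress=b8:59:9f:de:85:2c

    [Network]
    Bond=bond0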
Jan 30 14:10:29.003999 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:10:29.013746 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:10:29.023691 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:29.031707 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:10:29.031719 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:10:29.039723 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:10:29.050309 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:10:29.060170 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:10:29.069272 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:10:29.072516 coreos-metadata[1780]: Jan 30 14:10:29.072 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:10:29.078394 dbus-daemon[1781]: [system] SELinux support is enabled Jan 30 14:10:29.079432 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:10:29.081153 jq[1784]: false Jan 30 14:10:29.088773 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:10:29.089451 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:10:29.096513 extend-filesystems[1786]: Found loop4 Jan 30 14:10:29.096513 extend-filesystems[1786]: Found loop5 Jan 30 14:10:29.151872 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jan 30 14:10:29.151892 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1449) Jan 30 14:10:29.099436 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:10:29.152157 extend-filesystems[1786]: Found loop6 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found loop7 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sda Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb1 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb2 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb3 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found usr Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb4 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb6 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb7 Jan 30 14:10:29.152157 extend-filesystems[1786]: Found sdb9 Jan 30 14:10:29.152157 extend-filesystems[1786]: Checking size of /dev/sdb9 Jan 30 14:10:29.152157 extend-filesystems[1786]: Resized partition /dev/sdb9 Jan 30 14:10:29.301902 extend-filesystems[1794]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:10:29.162540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:10:29.200134 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:10:29.238836 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:10:29.242605 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 30 14:10:29.270088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 30 14:10:29.322302 sshd_keygen[1810]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:10:29.277781 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:10:29.322374 update_engine[1811]: I20250130 14:10:29.285319 1811 main.cc:92] Flatcar Update Engine starting Jan 30 14:10:29.322374 update_engine[1811]: I20250130 14:10:29.285965 1811 update_check_scheduler.cc:74] Next update check in 3m7s Jan 30 14:10:29.279729 systemd-logind[1806]: Watching system buttons on /dev/input/event3 (Power Button) Jan 30 14:10:29.322618 jq[1812]: true Jan 30 14:10:29.279738 systemd-logind[1806]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 14:10:29.279748 systemd-logind[1806]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 30 14:10:29.280019 systemd-logind[1806]: New seat seat0. Jan 30 14:10:29.294389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:10:29.313941 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:10:29.332972 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:10:29.350841 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:10:29.350929 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:10:29.351110 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:10:29.351190 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:10:29.361097 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:10:29.361180 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:10:29.371835 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:10:29.397577 jq[1823]: true Jan 30 14:10:29.398336 (ntainerd)[1824]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:10:29.401855 dbus-daemon[1781]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 14:10:29.403273 tar[1821]: linux-amd64/helm Jan 30 14:10:29.409196 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 30 14:10:29.409319 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 30 14:10:29.411284 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:10:29.427820 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:10:29.435766 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:10:29.435867 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:10:29.446814 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:10:29.446911 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:10:29.470798 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:10:29.474262 bash[1851]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:10:29.483036 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
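update_engine's "Next update check in 3m7s" is Flatcar's A/B update client scheduling its first poll, and the locksmithd line that follows shows the reboot coordinator starting with strategy "reboot" (on Flatcar that strategy normally comes from REBOOT_STRATEGY in /etc/flatcar/update.conf). Its state can be queried with the bundled client:

    # Ask update_engine for its current state, or force a poll
    $ update_engine_client -status
    $ update_engine_client -check_for_update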
Jan 30 14:10:29.492391 locksmithd[1859]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:10:29.494016 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:10:29.494109 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:10:29.509872 systemd[1]: Starting sshkeys.service... Jan 30 14:10:29.517468 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:10:29.529717 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:10:29.549962 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:10:29.560965 coreos-metadata[1873]: Jan 30 14:10:29.560 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:10:29.561162 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:10:29.570562 containerd[1824]: time="2025-01-30T14:10:29.570514254Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:10:29.575640 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:10:29.583683 containerd[1824]: time="2025-01-30T14:10:29.583656546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584456 containerd[1824]: time="2025-01-30T14:10:29.584437972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584456 containerd[1824]: time="2025-01-30T14:10:29.584455406Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:10:29.584523 containerd[1824]: time="2025-01-30T14:10:29.584465097Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:10:29.584554 containerd[1824]: time="2025-01-30T14:10:29.584548636Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:10:29.584585 containerd[1824]: time="2025-01-30T14:10:29.584558456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584610 containerd[1824]: time="2025-01-30T14:10:29.584593826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584610 containerd[1824]: time="2025-01-30T14:10:29.584602562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584715 containerd[1824]: time="2025-01-30T14:10:29.584703106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584715 containerd[1824]: time="2025-01-30T14:10:29.584713498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584766 containerd[1824]: time="2025-01-30T14:10:29.584721389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584766 containerd[1824]: time="2025-01-30T14:10:29.584726930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584826 containerd[1824]: time="2025-01-30T14:10:29.584772631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584907 containerd[1824]: time="2025-01-30T14:10:29.584897354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584970 containerd[1824]: time="2025-01-30T14:10:29.584959673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:29.584970 containerd[1824]: time="2025-01-30T14:10:29.584969028Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:10:29.585021 containerd[1824]: time="2025-01-30T14:10:29.585010960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:10:29.585045 containerd[1824]: time="2025-01-30T14:10:29.585037958Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:10:29.585581 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 30 14:10:29.591345 containerd[1824]: time="2025-01-30T14:10:29.591330943Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:10:29.591392 containerd[1824]: time="2025-01-30T14:10:29.591355085Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:10:29.591392 containerd[1824]: time="2025-01-30T14:10:29.591365769Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:10:29.591392 containerd[1824]: time="2025-01-30T14:10:29.591379029Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:10:29.591392 containerd[1824]: time="2025-01-30T14:10:29.591388326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:10:29.591490 containerd[1824]: time="2025-01-30T14:10:29.591462575Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:10:29.591613 containerd[1824]: time="2025-01-30T14:10:29.591599219Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:10:29.591686 containerd[1824]: time="2025-01-30T14:10:29.591674450Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:10:29.591720 containerd[1824]: time="2025-01-30T14:10:29.591686298Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:10:29.591720 containerd[1824]: time="2025-01-30T14:10:29.591694828Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 14:10:29.591720 containerd[1824]: time="2025-01-30T14:10:29.591704065Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591720 containerd[1824]: time="2025-01-30T14:10:29.591711650Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591720 containerd[1824]: time="2025-01-30T14:10:29.591718524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591726081Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591733886Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591741303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591748377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591754734Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591767325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591774953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591781632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591788631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591795648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591806834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591814362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591821247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.591844 containerd[1824]: time="2025-01-30T14:10:29.591827971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591836190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591843260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591850172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591857142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591864933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591877879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591884927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591890736Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591916110Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591925506Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591932161Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591938936Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:10:29.592174 containerd[1824]: time="2025-01-30T14:10:29.591944185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:10:29.592483 containerd[1824]: time="2025-01-30T14:10:29.591950890Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:10:29.592483 containerd[1824]: time="2025-01-30T14:10:29.591958899Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:10:29.592483 containerd[1824]: time="2025-01-30T14:10:29.591964940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:10:29.592559 containerd[1824]: time="2025-01-30T14:10:29.592118553Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:10:29.592559 containerd[1824]: time="2025-01-30T14:10:29.592152219Z" level=info msg="Connect containerd service" Jan 30 14:10:29.592559 containerd[1824]: time="2025-01-30T14:10:29.592170215Z" level=info msg="using legacy CRI server" Jan 30 14:10:29.592559 containerd[1824]: time="2025-01-30T14:10:29.592174360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:10:29.592559 containerd[1824]: time="2025-01-30T14:10:29.592223403Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:10:29.592559 containerd[1824]: time="2025-01-30T14:10:29.592507347Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:10:29.592802 
containerd[1824]: time="2025-01-30T14:10:29.592596169Z" level=info msg="Start subscribing containerd event" Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592624203Z" level=info msg="Start recovering state" Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592671578Z" level=info msg="Start event monitor" Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592674986Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592679970Z" level=info msg="Start snapshots syncer" Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592689446Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592693274Z" level=info msg="Start streaming server" Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592702677Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:10:29.592802 containerd[1824]: time="2025-01-30T14:10:29.592728121Z" level=info msg="containerd successfully booted in 0.022862s" Jan 30 14:10:29.595905 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:10:29.605021 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:10:29.685354 tar[1821]: linux-amd64/LICENSE Jan 30 14:10:29.685354 tar[1821]: linux-amd64/README.md Jan 30 14:10:29.693448 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:10:29.862693 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jan 30 14:10:29.893096 extend-filesystems[1794]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jan 30 14:10:29.893096 extend-filesystems[1794]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 30 14:10:29.893096 extend-filesystems[1794]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jan 30 14:10:29.933889 extend-filesystems[1786]: Resized filesystem in /dev/sdb9 Jan 30 14:10:29.893571 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:10:29.893689 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:10:30.171986 systemd-networkd[1609]: bond0: Gained IPv6LL Jan 30 14:10:30.177281 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:10:30.191985 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:10:30.221823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:30.233445 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:10:30.263267 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:10:30.929494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
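The extend-filesystems output above is an online grow of the root filesystem: the sdb9 partition was enlarged at install time, and resize2fs then expands the mounted ext4 filesystem from 553472 to 116605649 4k blocks without unmounting it. The manual equivalent is a single command:

    # Online-grow a mounted ext4 filesystem to fill its enlarged partition
    $ resize2fs /dev/sdb9
    Filesystem at /dev/sdb9 is mounted on /; on-line resizing required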
Jan 30 14:10:30.941196 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:31.456089 kubelet[1913]: E0130 14:10:31.456016 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:31.457175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:31.457250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:10:31.611739 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 30 14:10:31.611912 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 30 14:10:32.382713 systemd-resolved[1727]: Clock change detected. Flushing caches. Jan 30 14:10:32.382725 systemd-timesyncd[1774]: Contacted time server 104.234.61.117:123 (0.flatcar.pool.ntp.org). Jan 30 14:10:32.382767 systemd-timesyncd[1774]: Initial clock synchronization to Thu 2025-01-30 14:10:32.382660 UTC. Jan 30 14:10:32.597136 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:10:32.614404 systemd[1]: Started sshd@0-139.178.70.199:22-147.75.109.163:50684.service - OpenSSH per-connection server daemon (147.75.109.163:50684). Jan 30 14:10:32.655610 sshd[1935]: Accepted publickey for core from 147.75.109.163 port 50684 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:32.657016 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:32.662762 systemd-logind[1806]: New session 1 of user core. Jan 30 14:10:32.663550 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:10:32.691576 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:10:32.704094 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:10:32.727505 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:10:32.752274 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:10:32.862659 systemd[1939]: Queued start job for default target default.target. Jan 30 14:10:32.874018 systemd[1939]: Created slice app.slice - User Application Slice. Jan 30 14:10:32.874036 systemd[1939]: Reached target paths.target - Paths. Jan 30 14:10:32.874046 systemd[1939]: Reached target timers.target - Timers. Jan 30 14:10:32.874826 systemd[1939]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:10:32.880701 systemd[1939]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:10:32.880729 systemd[1939]: Reached target sockets.target - Sockets. Jan 30 14:10:32.880738 systemd[1939]: Reached target basic.target - Basic System. Jan 30 14:10:32.880759 systemd[1939]: Reached target default.target - Main User Target. Jan 30 14:10:32.880775 systemd[1939]: Startup finished in 118ms. Jan 30 14:10:32.880860 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:10:32.892210 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:10:32.968056 systemd[1]: Started sshd@1-139.178.70.199:22-147.75.109.163:50688.service - OpenSSH per-connection server daemon (147.75.109.163:50688). 
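The kubelet failure earlier in this stretch is expected at this stage: /var/lib/kubelet/config.yaml is written by kubeadm during "kubeadm init" or "kubeadm join", so until the node is bootstrapped the unit exits and systemd keeps rescheduling it (the "restart counter" lines that follow). For reference, the file the kubelet is looking for is a KubeletConfiguration document; a minimal illustrative shape, not the file this node eventually receives:

    # /var/lib/kubelet/config.yaml (minimal illustrative shape)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd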
Jan 30 14:10:33.013935 sshd[1950]: Accepted publickey for core from 147.75.109.163 port 50688 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:33.014625 sshd[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:33.017097 systemd-logind[1806]: New session 2 of user core. Jan 30 14:10:33.032297 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:10:33.094502 sshd[1950]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:33.106216 systemd[1]: sshd@1-139.178.70.199:22-147.75.109.163:50688.service: Deactivated successfully. Jan 30 14:10:33.107922 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:10:33.108611 systemd-logind[1806]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:10:33.118563 systemd[1]: Started sshd@2-139.178.70.199:22-147.75.109.163:50702.service - OpenSSH per-connection server daemon (147.75.109.163:50702). Jan 30 14:10:33.129989 systemd-logind[1806]: Removed session 2. Jan 30 14:10:33.146163 sshd[1957]: Accepted publickey for core from 147.75.109.163 port 50702 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:33.147070 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:33.150140 systemd-logind[1806]: New session 3 of user core. Jan 30 14:10:33.151217 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:10:33.216158 sshd[1957]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:33.217600 systemd[1]: sshd@2-139.178.70.199:22-147.75.109.163:50702.service: Deactivated successfully. Jan 30 14:10:33.218449 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:10:33.219054 systemd-logind[1806]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:10:33.219798 systemd-logind[1806]: Removed session 3. Jan 30 14:10:33.433345 coreos-metadata[1780]: Jan 30 14:10:33.433 INFO Fetch successful Jan 30 14:10:33.528590 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:10:33.552492 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 30 14:10:33.633576 coreos-metadata[1873]: Jan 30 14:10:33.633 INFO Fetch successful Jan 30 14:10:33.715756 unknown[1873]: wrote ssh authorized keys file for user: core Jan 30 14:10:33.741631 update-ssh-keys[1970]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:10:33.741900 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:10:33.754957 systemd[1]: Finished sshkeys.service. Jan 30 14:10:33.926042 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 30 14:10:33.939836 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:10:33.949863 systemd[1]: Startup finished in 2.728s (kernel) + 24.698s (initrd) + 9.990s (userspace) = 37.417s. Jan 30 14:10:33.970030 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:10:33.972955 systemd-logind[1806]: New session 4 of user core. Jan 30 14:10:33.987380 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:10:33.995467 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:10:33.998174 systemd-logind[1806]: New session 5 of user core. Jan 30 14:10:33.998855 systemd[1]: Started session-5.scope - Session 5 of User core. 
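The "Startup finished" line splits the 37.417s boot into kernel, initrd, and userspace phases; systemd-analyze, which ships with systemd, can attribute the userspace share to individual units. A sketch:

    systemd-analyze time
    systemd-analyze blame | head                      # slowest units first
    systemd-analyze critical-chain multi-user.target  # what gated this target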
Jan 30 14:10:41.973071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:10:41.986406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:42.205625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:42.207803 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:42.233396 kubelet[2006]: E0130 14:10:42.233287 2006 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:42.235497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:42.235581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:10:43.241130 systemd[1]: Started sshd@3-139.178.70.199:22-147.75.109.163:59170.service - OpenSSH per-connection server daemon (147.75.109.163:59170). Jan 30 14:10:43.270539 sshd[2024]: Accepted publickey for core from 147.75.109.163 port 59170 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:43.271351 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:43.274198 systemd-logind[1806]: New session 6 of user core. Jan 30 14:10:43.293545 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:10:43.349385 sshd[2024]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:43.364844 systemd[1]: sshd@3-139.178.70.199:22-147.75.109.163:59170.service: Deactivated successfully. Jan 30 14:10:43.365691 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:10:43.366471 systemd-logind[1806]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:10:43.367200 systemd[1]: Started sshd@4-139.178.70.199:22-147.75.109.163:59186.service - OpenSSH per-connection server daemon (147.75.109.163:59186). Jan 30 14:10:43.367743 systemd-logind[1806]: Removed session 6. Jan 30 14:10:43.401526 sshd[2031]: Accepted publickey for core from 147.75.109.163 port 59186 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:43.403421 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:43.411776 systemd-logind[1806]: New session 7 of user core. Jan 30 14:10:43.423556 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:10:43.484034 sshd[2031]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:43.509882 systemd[1]: sshd@4-139.178.70.199:22-147.75.109.163:59186.service: Deactivated successfully. Jan 30 14:10:43.514487 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:10:43.517723 systemd-logind[1806]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:10:43.530454 systemd[1]: Started sshd@5-139.178.70.199:22-147.75.109.163:59188.service - OpenSSH per-connection server daemon (147.75.109.163:59188). Jan 30 14:10:43.530903 systemd-logind[1806]: Removed session 7. 
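Roughly ten seconds separate each kubelet failure from its "Scheduled restart job", which is consistent with a unit carrying Restart= plus RestartSec=10. A hypothetical drop-in expressing such a policy (the shipped Flatcar unit may phrase it differently):

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat >/etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload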
Jan 30 14:10:43.557982 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 59188 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:43.558772 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:43.561709 systemd-logind[1806]: New session 8 of user core. Jan 30 14:10:43.572363 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:10:43.626545 sshd[2038]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:43.644900 systemd[1]: sshd@5-139.178.70.199:22-147.75.109.163:59188.service: Deactivated successfully. Jan 30 14:10:43.645774 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:10:43.646613 systemd-logind[1806]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:10:43.647494 systemd[1]: Started sshd@6-139.178.70.199:22-147.75.109.163:59190.service - OpenSSH per-connection server daemon (147.75.109.163:59190). Jan 30 14:10:43.648065 systemd-logind[1806]: Removed session 8. Jan 30 14:10:43.684073 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 59190 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:43.686064 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:43.695266 systemd-logind[1806]: New session 9 of user core. Jan 30 14:10:43.712559 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:10:43.802461 sudo[2048]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:10:43.802612 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:43.816848 sudo[2048]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:43.817887 sshd[2045]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:43.838405 systemd[1]: sshd@6-139.178.70.199:22-147.75.109.163:59190.service: Deactivated successfully. Jan 30 14:10:43.839584 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:10:43.840746 systemd-logind[1806]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:10:43.841900 systemd[1]: Started sshd@7-139.178.70.199:22-147.75.109.163:59200.service - OpenSSH per-connection server daemon (147.75.109.163:59200). Jan 30 14:10:43.842734 systemd-logind[1806]: Removed session 9. Jan 30 14:10:43.884462 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 59200 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:43.885164 sshd[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:43.887643 systemd-logind[1806]: New session 10 of user core. Jan 30 14:10:43.906361 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:10:43.969227 sudo[2057]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:10:43.970039 sudo[2057]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:43.981447 sudo[2057]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:43.986837 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:10:43.987159 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:44.004483 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 14:10:44.005512 auditctl[2060]: No rules Jan 30 14:10:44.005703 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:10:44.005812 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:10:44.007179 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:10:44.026319 augenrules[2078]: No rules Jan 30 14:10:44.026876 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:10:44.027671 sudo[2056]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:44.029010 sshd[2053]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:44.032177 systemd[1]: sshd@7-139.178.70.199:22-147.75.109.163:59200.service: Deactivated successfully. Jan 30 14:10:44.033467 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:10:44.034085 systemd-logind[1806]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:10:44.035928 systemd[1]: Started sshd@8-139.178.70.199:22-147.75.109.163:59216.service - OpenSSH per-connection server daemon (147.75.109.163:59216). Jan 30 14:10:44.036947 systemd-logind[1806]: Removed session 10. Jan 30 14:10:44.082771 sshd[2086]: Accepted publickey for core from 147.75.109.163 port 59216 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:10:44.084427 sshd[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:44.089998 systemd-logind[1806]: New session 11 of user core. Jan 30 14:10:44.109724 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:10:44.179764 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:10:44.180622 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:44.554291 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:10:44.554380 (dockerd)[2115]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:10:44.847191 dockerd[2115]: time="2025-01-30T14:10:44.847052978Z" level=info msg="Starting up" Jan 30 14:10:44.912958 dockerd[2115]: time="2025-01-30T14:10:44.912910674Z" level=info msg="Loading containers: start." Jan 30 14:10:45.007115 kernel: Initializing XFRM netlink socket Jan 30 14:10:45.066726 systemd-networkd[1609]: docker0: Link UP Jan 30 14:10:45.079837 dockerd[2115]: time="2025-01-30T14:10:45.079817873Z" level=info msg="Loading containers: done." Jan 30 14:10:45.086638 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck806729396-merged.mount: Deactivated successfully. Jan 30 14:10:45.087960 dockerd[2115]: time="2025-01-30T14:10:45.087909199Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:10:45.087960 dockerd[2115]: time="2025-01-30T14:10:45.087955211Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:10:45.088019 dockerd[2115]: time="2025-01-30T14:10:45.088006201Z" level=info msg="Daemon has completed initialization" Jan 30 14:10:45.103261 dockerd[2115]: time="2025-01-30T14:10:45.103183552Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:10:45.103268 systemd[1]: Started docker.service - Docker Application Container Engine. 
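The sudo sessions above prepare SELinux and auditd: setenforce 1 flips SELinux to enforcing for the running system (the boot-time default lives in /etc/selinux/config), and the audit-rules cycle is auditctl clearing the loaded ruleset while augenrules recompiles what remains under /etc/audit/rules.d; with the two rule files just deleted, both report "No rules". The manual equivalent, as a sketch:

    getenforce           # Enforcing / Permissive / Disabled
    auditctl -D          # delete every loaded audit rule
    augenrules --load    # concatenate /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list active rules ("No rules" when empty)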
Jan 30 14:10:45.929286 containerd[1824]: time="2025-01-30T14:10:45.929211117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:10:46.528447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237568452.mount: Deactivated successfully. Jan 30 14:10:47.378647 containerd[1824]: time="2025-01-30T14:10:47.378592192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:47.378875 containerd[1824]: time="2025-01-30T14:10:47.378701389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 14:10:47.379224 containerd[1824]: time="2025-01-30T14:10:47.379181920Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:47.380815 containerd[1824]: time="2025-01-30T14:10:47.380773616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:47.381470 containerd[1824]: time="2025-01-30T14:10:47.381427221Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.452194372s" Jan 30 14:10:47.381470 containerd[1824]: time="2025-01-30T14:10:47.381446230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 14:10:47.392659 containerd[1824]: time="2025-01-30T14:10:47.392614213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:10:48.509214 containerd[1824]: time="2025-01-30T14:10:48.509154295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:48.509443 containerd[1824]: time="2025-01-30T14:10:48.509352838Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 14:10:48.509806 containerd[1824]: time="2025-01-30T14:10:48.509767739Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:48.512015 containerd[1824]: time="2025-01-30T14:10:48.511973396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:48.512626 containerd[1824]: time="2025-01-30T14:10:48.512585070Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.119949553s" Jan 30 
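The PullImage/ImageCreate pairs in this stretch are containerd fetching the control-plane images into its k8s.io namespace. The same pull can be reproduced from the node, as a sketch:

    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.9
    # or via the CRI endpoint the kubelet itself uses:
    crictl pull registry.k8s.io/kube-apiserver:v1.30.9
    crictl images | grep kube-apiserver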
14:10:48.512626 containerd[1824]: time="2025-01-30T14:10:48.512601728Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 14:10:48.524244 containerd[1824]: time="2025-01-30T14:10:48.524223571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:10:49.373245 containerd[1824]: time="2025-01-30T14:10:49.373186646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:49.373471 containerd[1824]: time="2025-01-30T14:10:49.373423907Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 14:10:49.373829 containerd[1824]: time="2025-01-30T14:10:49.373788339Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:49.375745 containerd[1824]: time="2025-01-30T14:10:49.375702191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:49.376268 containerd[1824]: time="2025-01-30T14:10:49.376226025Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 851.981625ms" Jan 30 14:10:49.376268 containerd[1824]: time="2025-01-30T14:10:49.376243695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 14:10:49.387044 containerd[1824]: time="2025-01-30T14:10:49.387024549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:10:50.199176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556583563.mount: Deactivated successfully. 
Jan 30 14:10:50.363980 containerd[1824]: time="2025-01-30T14:10:50.363953674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:50.364202 containerd[1824]: time="2025-01-30T14:10:50.364109306Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 14:10:50.364497 containerd[1824]: time="2025-01-30T14:10:50.364484138Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:50.365349 containerd[1824]: time="2025-01-30T14:10:50.365337196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:50.366013 containerd[1824]: time="2025-01-30T14:10:50.366001114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 978.955408ms" Jan 30 14:10:50.366039 containerd[1824]: time="2025-01-30T14:10:50.366016863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 14:10:50.376942 containerd[1824]: time="2025-01-30T14:10:50.376921196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:10:50.891011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424193639.mount: Deactivated successfully. 
Jan 30 14:10:51.390474 containerd[1824]: time="2025-01-30T14:10:51.390419737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:51.390698 containerd[1824]: time="2025-01-30T14:10:51.390585309Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 14:10:51.391079 containerd[1824]: time="2025-01-30T14:10:51.391039142Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:51.393713 containerd[1824]: time="2025-01-30T14:10:51.393670695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:51.394226 containerd[1824]: time="2025-01-30T14:10:51.394179039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.017238299s" Jan 30 14:10:51.394226 containerd[1824]: time="2025-01-30T14:10:51.394196193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 14:10:51.405426 containerd[1824]: time="2025-01-30T14:10:51.405403255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:10:51.882048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378024412.mount: Deactivated successfully. 
Jan 30 14:10:51.883529 containerd[1824]: time="2025-01-30T14:10:51.883511917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:51.883660 containerd[1824]: time="2025-01-30T14:10:51.883640795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 14:10:51.884119 containerd[1824]: time="2025-01-30T14:10:51.884108781Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:51.885335 containerd[1824]: time="2025-01-30T14:10:51.885322913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:51.885796 containerd[1824]: time="2025-01-30T14:10:51.885785169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 480.361612ms" Jan 30 14:10:51.885821 containerd[1824]: time="2025-01-30T14:10:51.885798835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 14:10:51.897682 containerd[1824]: time="2025-01-30T14:10:51.897663361Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:10:52.392447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:10:52.408311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:52.409504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604270530.mount: Deactivated successfully. Jan 30 14:10:52.620952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:52.623282 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:52.660223 kubelet[2513]: E0130 14:10:52.658488 2513 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:52.659686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:52.659764 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
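pause is the sandbox ("infra") image that anchors every pod's namespaces. Note the version skew visible later in this log: pause:3.9 is pulled here, yet the sandboxes further down are created from pause:3.8, the built-in default of the containerd 1.7 series running on this node. The image containerd actually uses for sandboxes comes from its CRI plugin config; a sketch of the relevant fragment, assuming the stock config.toml layout (merge into the existing section rather than appending blindly, then restart containerd):

    # /etc/containerd/config.toml (fragment)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"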
Jan 30 14:10:53.707752 containerd[1824]: time="2025-01-30T14:10:53.707712021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:53.708038 containerd[1824]: time="2025-01-30T14:10:53.707906441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 14:10:53.708457 containerd[1824]: time="2025-01-30T14:10:53.708442415Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:53.710406 containerd[1824]: time="2025-01-30T14:10:53.710355248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:53.710935 containerd[1824]: time="2025-01-30T14:10:53.710919935Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.813236457s" Jan 30 14:10:53.710981 containerd[1824]: time="2025-01-30T14:10:53.710936394Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 14:10:56.469987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:56.478488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:56.487411 systemd[1]: Reloading requested from client PID 2722 ('systemctl') (unit session-11.scope)... Jan 30 14:10:56.487419 systemd[1]: Reloading... Jan 30 14:10:56.530157 zram_generator::config[2761]: No configuration found. Jan 30 14:10:56.595601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:10:56.655789 systemd[1]: Reloading finished in 168 ms. Jan 30 14:10:56.697428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:56.698615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:56.699713 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:10:56.699818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:56.700709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:56.899276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:56.904814 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:10:56.926608 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:10:56.926608 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
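The daemon reload above logs a compatibility warning for docker.socket: its ListenStream= points below the legacy /var/run/ tree. The fix systemd asks for, sketched as a drop-in (paths taken from the warning itself; the empty assignment resets the inherited list before re-adding the corrected path):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat >/etc/systemd/system/docker.socket.d/10-runpath.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload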
Jan 30 14:10:56.926608 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:10:56.926840 kubelet[2830]: I0130 14:10:56.926617 2830 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:10:57.244694 kubelet[2830]: I0130 14:10:57.244648 2830 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:10:57.244694 kubelet[2830]: I0130 14:10:57.244662 2830 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:10:57.244795 kubelet[2830]: I0130 14:10:57.244781 2830 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:10:57.255356 kubelet[2830]: I0130 14:10:57.255321 2830 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:10:57.259028 kubelet[2830]: E0130 14:10:57.258984 2830 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.271060 kubelet[2830]: I0130 14:10:57.271028 2830 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:10:57.271689 kubelet[2830]: I0130 14:10:57.271644 2830 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:10:57.271784 kubelet[2830]: I0130 14:10:57.271662 2830 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-feecaa3039","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:10:57.271784 kubelet[2830]: I0130 14:10:57.271763 2830 topology_manager.go:138] 
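The flag deprecation warnings above point at the kubelet config file, and the nodeConfig dump shows the effective values. A hedged sketch of the matching KubeletConfiguration fields (v1beta1 schema; values mirrored from this log where visible, the runtime endpoint presumed from containerd's socket earlier in the log). Note that --pod-infra-container-image has no config-file equivalent: per its own warning it is simply going away, with the sandbox image reported by the CRI instead.

    # /var/lib/kubelet/config.yaml (fragment)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"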
"Creating topology manager with none policy" Jan 30 14:10:57.271784 kubelet[2830]: I0130 14:10:57.271770 2830 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:10:57.271886 kubelet[2830]: I0130 14:10:57.271822 2830 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:10:57.272531 kubelet[2830]: I0130 14:10:57.272489 2830 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:10:57.272531 kubelet[2830]: I0130 14:10:57.272498 2830 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:10:57.272531 kubelet[2830]: I0130 14:10:57.272509 2830 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:10:57.272531 kubelet[2830]: I0130 14:10:57.272517 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:10:57.272913 kubelet[2830]: W0130 14:10:57.272844 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.272913 kubelet[2830]: E0130 14:10:57.272890 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.272913 kubelet[2830]: W0130 14:10:57.272886 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-feecaa3039&limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.272913 kubelet[2830]: E0130 14:10:57.272911 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-feecaa3039&limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.275805 kubelet[2830]: I0130 14:10:57.275740 2830 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:10:57.276848 kubelet[2830]: I0130 14:10:57.276813 2830 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:10:57.276848 kubelet[2830]: W0130 14:10:57.276839 2830 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:10:57.277304 kubelet[2830]: I0130 14:10:57.277228 2830 server.go:1264] "Started kubelet" Jan 30 14:10:57.277304 kubelet[2830]: I0130 14:10:57.277282 2830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:10:57.277346 kubelet[2830]: I0130 14:10:57.277307 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:10:57.277515 kubelet[2830]: I0130 14:10:57.277468 2830 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:10:57.280796 kubelet[2830]: E0130 14:10:57.280780 2830 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:10:57.280991 kubelet[2830]: I0130 14:10:57.280981 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:10:57.281052 kubelet[2830]: I0130 14:10:57.281040 2830 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:10:57.281442 kubelet[2830]: E0130 14:10:57.281059 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:57.281442 kubelet[2830]: I0130 14:10:57.281079 2830 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:10:57.281442 kubelet[2830]: E0130 14:10:57.281333 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-feecaa3039?timeout=10s\": dial tcp 139.178.70.199:6443: connect: connection refused" interval="200ms" Jan 30 14:10:57.281557 kubelet[2830]: I0130 14:10:57.281545 2830 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:10:57.281589 kubelet[2830]: I0130 14:10:57.281562 2830 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:10:57.281589 kubelet[2830]: I0130 14:10:57.281579 2830 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:10:57.281649 kubelet[2830]: I0130 14:10:57.281638 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:10:57.281744 kubelet[2830]: W0130 14:10:57.281717 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.281781 kubelet[2830]: E0130 14:10:57.281756 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.282197 kubelet[2830]: I0130 14:10:57.282188 2830 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:10:57.283647 kubelet[2830]: E0130 14:10:57.283560 2830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.199:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-feecaa3039.181f7dc2e9292c4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-feecaa3039,UID:ci-4081.3.0-a-feecaa3039,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-feecaa3039,},FirstTimestamp:2025-01-30 14:10:57.277217869 +0000 UTC m=+0.370560540,LastTimestamp:2025-01-30 14:10:57.277217869 +0000 UTC m=+0.370560540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-feecaa3039,}" Jan 30 14:10:57.289838 kubelet[2830]: I0130 14:10:57.289762 2830 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 14:10:57.290421 kubelet[2830]: I0130 14:10:57.290360 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:10:57.290421 kubelet[2830]: I0130 14:10:57.290376 2830 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:10:57.290421 kubelet[2830]: I0130 14:10:57.290385 2830 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:10:57.290487 kubelet[2830]: E0130 14:10:57.290430 2830 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:10:57.290795 kubelet[2830]: W0130 14:10:57.290778 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.290835 kubelet[2830]: E0130 14:10:57.290800 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:57.297988 kubelet[2830]: I0130 14:10:57.297948 2830 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:10:57.297988 kubelet[2830]: I0130 14:10:57.297955 2830 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:10:57.297988 kubelet[2830]: I0130 14:10:57.297977 2830 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:10:57.298857 kubelet[2830]: I0130 14:10:57.298822 2830 policy_none.go:49] "None policy: Start" Jan 30 14:10:57.299053 kubelet[2830]: I0130 14:10:57.299022 2830 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:10:57.299053 kubelet[2830]: I0130 14:10:57.299032 2830 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:10:57.301504 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:10:57.324415 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:10:57.326870 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:10:57.341324 kubelet[2830]: I0130 14:10:57.341267 2830 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:10:57.341586 kubelet[2830]: I0130 14:10:57.341516 2830 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:10:57.341725 kubelet[2830]: I0130 14:10:57.341677 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:10:57.342593 kubelet[2830]: E0130 14:10:57.342574 2830 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:57.385237 kubelet[2830]: I0130 14:10:57.385186 2830 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.385920 kubelet[2830]: E0130 14:10:57.385866 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.199:6443/api/v1/nodes\": dial tcp 139.178.70.199:6443: connect: connection refused" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.391178 kubelet[2830]: I0130 14:10:57.391066 2830 topology_manager.go:215] "Topology Admit Handler" podUID="1df8ce73b0f99eea3fd108b8a8e7bdfa" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.394614 kubelet[2830]: I0130 14:10:57.394567 2830 topology_manager.go:215] "Topology Admit Handler" podUID="1398cca636796dfee51a63859d626320" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.398744 kubelet[2830]: I0130 14:10:57.398697 2830 topology_manager.go:215] "Topology Admit Handler" podUID="a321018dba2e12be498d6b5e7fbc6357" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.414312 systemd[1]: Created slice kubepods-burstable-pod1df8ce73b0f99eea3fd108b8a8e7bdfa.slice - libcontainer container kubepods-burstable-pod1df8ce73b0f99eea3fd108b8a8e7bdfa.slice. Jan 30 14:10:57.440482 systemd[1]: Created slice kubepods-burstable-pod1398cca636796dfee51a63859d626320.slice - libcontainer container kubepods-burstable-pod1398cca636796dfee51a63859d626320.slice. Jan 30 14:10:57.460885 systemd[1]: Created slice kubepods-burstable-poda321018dba2e12be498d6b5e7fbc6357.slice - libcontainer container kubepods-burstable-poda321018dba2e12be498d6b5e7fbc6357.slice. 
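The three "Topology Admit Handler" entries are the kubelet admitting static pods it found on disk; no API server is involved yet, which is exactly how a control plane bootstraps itself. With the static pod path logged above, the manifests can be listed directly, as a sketch:

    ls /etc/kubernetes/manifests
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    # (kubeadm control planes typically carry etcd.yaml too; only three admits
    #  appear in this log, so etcd may arrive later or live elsewhere)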
Jan 30 14:10:57.482962 kubelet[2830]: I0130 14:10:57.482856 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.482962 kubelet[2830]: E0130 14:10:57.482870 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-feecaa3039?timeout=10s\": dial tcp 139.178.70.199:6443: connect: connection refused" interval="400ms" Jan 30 14:10:57.482962 kubelet[2830]: I0130 14:10:57.482936 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1df8ce73b0f99eea3fd108b8a8e7bdfa-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" (UID: \"1df8ce73b0f99eea3fd108b8a8e7bdfa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483397 kubelet[2830]: I0130 14:10:57.482993 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1df8ce73b0f99eea3fd108b8a8e7bdfa-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" (UID: \"1df8ce73b0f99eea3fd108b8a8e7bdfa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483397 kubelet[2830]: I0130 14:10:57.483096 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1df8ce73b0f99eea3fd108b8a8e7bdfa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" (UID: \"1df8ce73b0f99eea3fd108b8a8e7bdfa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483397 kubelet[2830]: I0130 14:10:57.483227 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483397 kubelet[2830]: I0130 14:10:57.483341 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483724 kubelet[2830]: I0130 14:10:57.483408 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483724 kubelet[2830]: I0130 14:10:57.483456 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.483724 kubelet[2830]: I0130 14:10:57.483502 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a321018dba2e12be498d6b5e7fbc6357-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-feecaa3039\" (UID: \"a321018dba2e12be498d6b5e7fbc6357\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.591226 kubelet[2830]: I0130 14:10:57.590995 2830 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.591785 kubelet[2830]: E0130 14:10:57.591679 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.199:6443/api/v1/nodes\": dial tcp 139.178.70.199:6443: connect: connection refused" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.735157 containerd[1824]: time="2025-01-30T14:10:57.735013343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-feecaa3039,Uid:1df8ce73b0f99eea3fd108b8a8e7bdfa,Namespace:kube-system,Attempt:0,}" Jan 30 14:10:57.755523 containerd[1824]: time="2025-01-30T14:10:57.755491223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-feecaa3039,Uid:1398cca636796dfee51a63859d626320,Namespace:kube-system,Attempt:0,}" Jan 30 14:10:57.766059 containerd[1824]: time="2025-01-30T14:10:57.766042851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-feecaa3039,Uid:a321018dba2e12be498d6b5e7fbc6357,Namespace:kube-system,Attempt:0,}" Jan 30 14:10:57.884660 kubelet[2830]: E0130 14:10:57.884407 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-feecaa3039?timeout=10s\": dial tcp 139.178.70.199:6443: connect: connection refused" interval="800ms" Jan 30 14:10:57.996034 kubelet[2830]: I0130 14:10:57.995987 2830 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:57.996847 kubelet[2830]: E0130 14:10:57.996719 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.199:6443/api/v1/nodes\": dial tcp 139.178.70.199:6443: connect: connection refused" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:58.125550 kubelet[2830]: W0130 14:10:58.125496 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.125550 kubelet[2830]: E0130 14:10:58.125521 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.159945 kubelet[2830]: W0130 14:10:58.159658 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://139.178.70.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-feecaa3039&limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.159945 kubelet[2830]: E0130 14:10:58.159799 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-feecaa3039&limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.209455 kubelet[2830]: W0130 14:10:58.209385 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.209455 kubelet[2830]: E0130 14:10:58.209426 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.232621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552068811.mount: Deactivated successfully. Jan 30 14:10:58.234588 containerd[1824]: time="2025-01-30T14:10:58.234538964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:10:58.234698 containerd[1824]: time="2025-01-30T14:10:58.234670535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 14:10:58.235063 containerd[1824]: time="2025-01-30T14:10:58.235052516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:10:58.235559 containerd[1824]: time="2025-01-30T14:10:58.235539979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:10:58.235599 containerd[1824]: time="2025-01-30T14:10:58.235586405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:10:58.236040 containerd[1824]: time="2025-01-30T14:10:58.236023977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:10:58.236087 containerd[1824]: time="2025-01-30T14:10:58.236060194Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:10:58.238038 containerd[1824]: time="2025-01-30T14:10:58.238022146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:10:58.238861 containerd[1824]: time="2025-01-30T14:10:58.238846374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.59959ms" Jan 30 14:10:58.239214 containerd[1824]: time="2025-01-30T14:10:58.239183807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.632827ms" Jan 30 14:10:58.240737 containerd[1824]: time="2025-01-30T14:10:58.240721380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.644692ms" Jan 30 14:10:58.358140 containerd[1824]: time="2025-01-30T14:10:58.358084507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:10:58.358140 containerd[1824]: time="2025-01-30T14:10:58.358125499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:10:58.358140 containerd[1824]: time="2025-01-30T14:10:58.358135564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:58.358140 containerd[1824]: time="2025-01-30T14:10:58.357895667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358154717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358163950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358165808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358185209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358191818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358199058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358206706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:58.358293 containerd[1824]: time="2025-01-30T14:10:58.358243749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:58.381320 systemd[1]: Started cri-containerd-2255a721e326e46a12ef06bf11d1becca239ad519f363c6c39098e01818354ba.scope - libcontainer container 2255a721e326e46a12ef06bf11d1becca239ad519f363c6c39098e01818354ba. Jan 30 14:10:58.382075 systemd[1]: Started cri-containerd-c849f2ea27b20ea4e3b9614821c77e72f0b6314df3fad0863af427b90953f6bd.scope - libcontainer container c849f2ea27b20ea4e3b9614821c77e72f0b6314df3fad0863af427b90953f6bd. Jan 30 14:10:58.382841 systemd[1]: Started cri-containerd-f1c157ad2d57dd4324407799b8547d61f65447d9b1dc1362a79d08575ead2b28.scope - libcontainer container f1c157ad2d57dd4324407799b8547d61f65447d9b1dc1362a79d08575ead2b28. Jan 30 14:10:58.397680 kubelet[2830]: W0130 14:10:58.397634 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.397741 kubelet[2830]: E0130 14:10:58.397689 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.199:6443: connect: connection refused Jan 30 14:10:58.404820 containerd[1824]: time="2025-01-30T14:10:58.404793132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-feecaa3039,Uid:1df8ce73b0f99eea3fd108b8a8e7bdfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2255a721e326e46a12ef06bf11d1becca239ad519f363c6c39098e01818354ba\"" Jan 30 14:10:58.405676 containerd[1824]: time="2025-01-30T14:10:58.405654338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-feecaa3039,Uid:1398cca636796dfee51a63859d626320,Namespace:kube-system,Attempt:0,} returns sandbox id \"c849f2ea27b20ea4e3b9614821c77e72f0b6314df3fad0863af427b90953f6bd\"" Jan 30 14:10:58.406066 containerd[1824]: time="2025-01-30T14:10:58.406049649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-feecaa3039,Uid:a321018dba2e12be498d6b5e7fbc6357,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1c157ad2d57dd4324407799b8547d61f65447d9b1dc1362a79d08575ead2b28\"" Jan 30 14:10:58.407059 containerd[1824]: time="2025-01-30T14:10:58.407047124Z" level=info msg="CreateContainer within sandbox \"2255a721e326e46a12ef06bf11d1becca239ad519f363c6c39098e01818354ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:10:58.407090 containerd[1824]: time="2025-01-30T14:10:58.407081377Z" level=info msg="CreateContainer within sandbox \"f1c157ad2d57dd4324407799b8547d61f65447d9b1dc1362a79d08575ead2b28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:10:58.407117 containerd[1824]: time="2025-01-30T14:10:58.407050603Z" level=info msg="CreateContainer within sandbox \"c849f2ea27b20ea4e3b9614821c77e72f0b6314df3fad0863af427b90953f6bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:10:58.413553 containerd[1824]: time="2025-01-30T14:10:58.413478383Z" level=info msg="CreateContainer within sandbox \"2255a721e326e46a12ef06bf11d1becca239ad519f363c6c39098e01818354ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26ba106a5bc0234d5560807bfee401daa043832d76db18e0ee93dd0d9317c685\"" Jan 30 14:10:58.413863 
containerd[1824]: time="2025-01-30T14:10:58.413813026Z" level=info msg="StartContainer for \"26ba106a5bc0234d5560807bfee401daa043832d76db18e0ee93dd0d9317c685\"" Jan 30 14:10:58.414360 containerd[1824]: time="2025-01-30T14:10:58.414317020Z" level=info msg="CreateContainer within sandbox \"f1c157ad2d57dd4324407799b8547d61f65447d9b1dc1362a79d08575ead2b28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5bd0412ef9e7b69a39aa53a4ea34c8aba2da90e677285c16b77af37a766a4356\"" Jan 30 14:10:58.414502 containerd[1824]: time="2025-01-30T14:10:58.414474921Z" level=info msg="StartContainer for \"5bd0412ef9e7b69a39aa53a4ea34c8aba2da90e677285c16b77af37a766a4356\"" Jan 30 14:10:58.415554 containerd[1824]: time="2025-01-30T14:10:58.415537314Z" level=info msg="CreateContainer within sandbox \"c849f2ea27b20ea4e3b9614821c77e72f0b6314df3fad0863af427b90953f6bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81e81c3fcf943ff7da3bf50953b2505dd418ad7753be0aff7de8fc35ba209567\"" Jan 30 14:10:58.415772 containerd[1824]: time="2025-01-30T14:10:58.415758077Z" level=info msg="StartContainer for \"81e81c3fcf943ff7da3bf50953b2505dd418ad7753be0aff7de8fc35ba209567\"" Jan 30 14:10:58.444395 systemd[1]: Started cri-containerd-26ba106a5bc0234d5560807bfee401daa043832d76db18e0ee93dd0d9317c685.scope - libcontainer container 26ba106a5bc0234d5560807bfee401daa043832d76db18e0ee93dd0d9317c685. Jan 30 14:10:58.444921 systemd[1]: Started cri-containerd-5bd0412ef9e7b69a39aa53a4ea34c8aba2da90e677285c16b77af37a766a4356.scope - libcontainer container 5bd0412ef9e7b69a39aa53a4ea34c8aba2da90e677285c16b77af37a766a4356. Jan 30 14:10:58.445457 systemd[1]: Started cri-containerd-81e81c3fcf943ff7da3bf50953b2505dd418ad7753be0aff7de8fc35ba209567.scope - libcontainer container 81e81c3fcf943ff7da3bf50953b2505dd418ad7753be0aff7de8fc35ba209567. 
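[Note] The reflector warnings and errors earlier in this window (reflector.go:547 / reflector.go:150) are kubelet's client-go informers retrying their initial LIST calls while kube-apiserver is itself still being started as a static pod on this node; every attempt fails at the TCP layer with "connection refused". A minimal Go sketch of the same probe, assuming only the standard library (the endpoint is copied from the log; this illustrates the failure mode, it is not kubelet code):

package main

// probe.go - reproduce the "dial tcp 139.178.70.199:6443: connect: connection
// refused" seen in the reflector retry loop until the apiserver binds its port.
import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "139.178.70.199:6443" // apiserver endpoint from the reflector errors
	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Prints: dial tcp 139.178.70.199:6443: connect: connection refused
			fmt.Println(err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
		return
	}
}

Once the kube-apiserver container started above comes up, the same LIST calls succeed and the informers stop logging.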
Jan 30 14:10:58.470564 containerd[1824]: time="2025-01-30T14:10:58.470511493Z" level=info msg="StartContainer for \"26ba106a5bc0234d5560807bfee401daa043832d76db18e0ee93dd0d9317c685\" returns successfully" Jan 30 14:10:58.471841 containerd[1824]: time="2025-01-30T14:10:58.471813395Z" level=info msg="StartContainer for \"81e81c3fcf943ff7da3bf50953b2505dd418ad7753be0aff7de8fc35ba209567\" returns successfully" Jan 30 14:10:58.471950 containerd[1824]: time="2025-01-30T14:10:58.471812853Z" level=info msg="StartContainer for \"5bd0412ef9e7b69a39aa53a4ea34c8aba2da90e677285c16b77af37a766a4356\" returns successfully" Jan 30 14:10:58.800710 kubelet[2830]: I0130 14:10:58.800694 2830 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:58.984109 kubelet[2830]: E0130 14:10:58.984083 2830 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-feecaa3039\" not found" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:59.099035 kubelet[2830]: I0130 14:10:59.098804 2830 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:10:59.119521 kubelet[2830]: E0130 14:10:59.119432 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.220697 kubelet[2830]: E0130 14:10:59.220599 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.320922 kubelet[2830]: E0130 14:10:59.320811 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.421506 kubelet[2830]: E0130 14:10:59.421255 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.521748 kubelet[2830]: E0130 14:10:59.521636 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.621993 kubelet[2830]: E0130 14:10:59.621879 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.723075 kubelet[2830]: E0130 14:10:59.722976 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.824262 kubelet[2830]: E0130 14:10:59.824153 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:10:59.925386 kubelet[2830]: E0130 14:10:59.925264 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:11:00.026410 kubelet[2830]: E0130 14:11:00.026198 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:11:00.127156 kubelet[2830]: E0130 14:11:00.127063 2830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:11:00.273524 kubelet[2830]: I0130 14:11:00.273464 2830 apiserver.go:52] "Watching apiserver" Jan 30 14:11:00.282474 kubelet[2830]: I0130 14:11:00.282270 2830 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:11:00.325557 kubelet[2830]: W0130 14:11:00.325473 2830 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:11:01.343064 systemd[1]: Reloading requested from client PID 3148 ('systemctl') (unit session-11.scope)... Jan 30 14:11:01.343072 systemd[1]: Reloading... Jan 30 14:11:01.381175 zram_generator::config[3187]: No configuration found. Jan 30 14:11:01.458660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:11:01.525810 systemd[1]: Reloading finished in 182 ms. Jan 30 14:11:01.558219 kubelet[2830]: I0130 14:11:01.558176 2830 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:11:01.558209 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:01.562922 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:11:01.563030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:01.575513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:01.796176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:01.798545 (kubelet)[3251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:11:01.821963 kubelet[3251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:11:01.821963 kubelet[3251]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:11:01.821963 kubelet[3251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:11:01.822226 kubelet[3251]: I0130 14:11:01.821992 3251 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:11:01.824486 kubelet[3251]: I0130 14:11:01.824445 3251 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:11:01.824486 kubelet[3251]: I0130 14:11:01.824458 3251 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:11:01.824603 kubelet[3251]: I0130 14:11:01.824572 3251 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:11:01.825365 kubelet[3251]: I0130 14:11:01.825328 3251 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:11:01.825991 kubelet[3251]: I0130 14:11:01.825979 3251 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:11:01.835649 kubelet[3251]: I0130 14:11:01.835606 3251 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:11:01.835742 kubelet[3251]: I0130 14:11:01.835724 3251 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:11:01.835868 kubelet[3251]: I0130 14:11:01.835743 3251 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-feecaa3039","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:11:01.835868 kubelet[3251]: I0130 14:11:01.835853 3251 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:11:01.835868 kubelet[3251]: I0130 14:11:01.835860 3251 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:11:01.835964 kubelet[3251]: I0130 14:11:01.835884 3251 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:11:01.835964 kubelet[3251]: I0130 14:11:01.835938 3251 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:11:01.835964 kubelet[3251]: I0130 14:11:01.835944 3251 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:11:01.835964 kubelet[3251]: I0130 14:11:01.835957 3251 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:11:01.836032 kubelet[3251]: I0130 14:11:01.835966 3251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:11:01.836448 kubelet[3251]: I0130 14:11:01.836415 3251 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:11:01.836526 kubelet[3251]: I0130 14:11:01.836518 3251 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:11:01.836783 kubelet[3251]: I0130 14:11:01.836775 3251 server.go:1264] "Started kubelet" Jan 30 14:11:01.836822 kubelet[3251]: I0130 14:11:01.836809 3251 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:11:01.837370 kubelet[3251]: I0130 14:11:01.836843 3251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:11:01.837568 kubelet[3251]: I0130 
14:11:01.837556 3251 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:11:01.837758 kubelet[3251]: I0130 14:11:01.837749 3251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:11:01.838548 kubelet[3251]: I0130 14:11:01.837830 3251 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:11:01.838548 kubelet[3251]: E0130 14:11:01.837850 3251 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-feecaa3039\" not found" Jan 30 14:11:01.838548 kubelet[3251]: I0130 14:11:01.837883 3251 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:11:01.839194 kubelet[3251]: I0130 14:11:01.839166 3251 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:11:01.839526 kubelet[3251]: I0130 14:11:01.839513 3251 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:11:01.839747 kubelet[3251]: I0130 14:11:01.839516 3251 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:11:01.840085 kubelet[3251]: I0130 14:11:01.839838 3251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:11:01.840582 kubelet[3251]: E0130 14:11:01.840568 3251 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:11:01.841470 kubelet[3251]: I0130 14:11:01.841458 3251 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:11:01.844788 kubelet[3251]: I0130 14:11:01.844768 3251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:11:01.845337 kubelet[3251]: I0130 14:11:01.845328 3251 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:11:01.845374 kubelet[3251]: I0130 14:11:01.845354 3251 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:11:01.845374 kubelet[3251]: I0130 14:11:01.845368 3251 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:11:01.845421 kubelet[3251]: E0130 14:11:01.845400 3251 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:11:01.856927 kubelet[3251]: I0130 14:11:01.856913 3251 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:11:01.856927 kubelet[3251]: I0130 14:11:01.856923 3251 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:11:01.856927 kubelet[3251]: I0130 14:11:01.856934 3251 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:11:01.857046 kubelet[3251]: I0130 14:11:01.857020 3251 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:11:01.857046 kubelet[3251]: I0130 14:11:01.857027 3251 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:11:01.857046 kubelet[3251]: I0130 14:11:01.857038 3251 policy_none.go:49] "None policy: Start" Jan 30 14:11:01.857384 kubelet[3251]: I0130 14:11:01.857347 3251 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:11:01.857384 kubelet[3251]: I0130 14:11:01.857357 3251 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:11:01.857487 kubelet[3251]: I0130 14:11:01.857449 3251 state_mem.go:75] "Updated machine memory state" Jan 30 14:11:01.859545 kubelet[3251]: I0130 14:11:01.859507 3251 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:11:01.859622 kubelet[3251]: I0130 14:11:01.859602 3251 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:11:01.859662 kubelet[3251]: I0130 14:11:01.859656 3251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:11:01.945285 kubelet[3251]: I0130 14:11:01.945213 3251 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:01.945686 kubelet[3251]: I0130 14:11:01.945580 3251 topology_manager.go:215] "Topology Admit Handler" podUID="1df8ce73b0f99eea3fd108b8a8e7bdfa" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:01.945948 kubelet[3251]: I0130 14:11:01.945770 3251 topology_manager.go:215] "Topology Admit Handler" podUID="1398cca636796dfee51a63859d626320" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:01.945948 kubelet[3251]: I0130 14:11:01.945940 3251 topology_manager.go:215] "Topology Admit Handler" podUID="a321018dba2e12be498d6b5e7fbc6357" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:01.953163 kubelet[3251]: W0130 14:11:01.953141 3251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:11:01.953163 kubelet[3251]: W0130 14:11:01.953141 3251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:11:01.953163 kubelet[3251]: I0130 14:11:01.953170 3251 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:01.953331 
kubelet[3251]: W0130 14:11:01.953240 3251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:11:01.953331 kubelet[3251]: I0130 14:11:01.953249 3251 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:01.953331 kubelet[3251]: E0130 14:11:01.953275 3251 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.041431 kubelet[3251]: I0130 14:11:02.041316 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a321018dba2e12be498d6b5e7fbc6357-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-feecaa3039\" (UID: \"a321018dba2e12be498d6b5e7fbc6357\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.041431 kubelet[3251]: I0130 14:11:02.041415 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1df8ce73b0f99eea3fd108b8a8e7bdfa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" (UID: \"1df8ce73b0f99eea3fd108b8a8e7bdfa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.041818 kubelet[3251]: I0130 14:11:02.041493 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.041818 kubelet[3251]: I0130 14:11:02.041602 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.041818 kubelet[3251]: I0130 14:11:02.041698 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.041818 kubelet[3251]: I0130 14:11:02.041762 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1df8ce73b0f99eea3fd108b8a8e7bdfa-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" (UID: \"1df8ce73b0f99eea3fd108b8a8e7bdfa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.042248 kubelet[3251]: I0130 14:11:02.041843 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1df8ce73b0f99eea3fd108b8a8e7bdfa-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" (UID: \"1df8ce73b0f99eea3fd108b8a8e7bdfa\") " 
pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.042248 kubelet[3251]: I0130 14:11:02.041905 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.042248 kubelet[3251]: I0130 14:11:02.042007 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1398cca636796dfee51a63859d626320-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" (UID: \"1398cca636796dfee51a63859d626320\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.836698 kubelet[3251]: I0130 14:11:02.836637 3251 apiserver.go:52] "Watching apiserver" Jan 30 14:11:02.838060 kubelet[3251]: I0130 14:11:02.838015 3251 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:11:02.855409 kubelet[3251]: W0130 14:11:02.855336 3251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:11:02.855409 kubelet[3251]: E0130 14:11:02.855409 3251 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-feecaa3039\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.855896 kubelet[3251]: W0130 14:11:02.855832 3251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:11:02.855999 kubelet[3251]: E0130 14:11:02.855931 3251 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-feecaa3039\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" Jan 30 14:11:02.868618 kubelet[3251]: I0130 14:11:02.868523 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-feecaa3039" podStartSLOduration=1.8684958790000001 podStartE2EDuration="1.868495879s" podCreationTimestamp="2025-01-30 14:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:02.868379391 +0000 UTC m=+1.067825271" watchObservedRunningTime="2025-01-30 14:11:02.868495879 +0000 UTC m=+1.067941756" Jan 30 14:11:02.888040 kubelet[3251]: I0130 14:11:02.887995 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-feecaa3039" podStartSLOduration=1.8879797699999998 podStartE2EDuration="1.88797977s" podCreationTimestamp="2025-01-30 14:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:02.88797567 +0000 UTC m=+1.087421548" watchObservedRunningTime="2025-01-30 14:11:02.88797977 +0000 UTC m=+1.087425646" Jan 30 14:11:02.892437 kubelet[3251]: I0130 14:11:02.892407 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-feecaa3039" 
podStartSLOduration=2.8923947180000003 podStartE2EDuration="2.892394718s" podCreationTimestamp="2025-01-30 14:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:02.892385596 +0000 UTC m=+1.091831477" watchObservedRunningTime="2025-01-30 14:11:02.892394718 +0000 UTC m=+1.091840603" Jan 30 14:11:06.161480 sudo[2089]: pam_unix(sudo:session): session closed for user root Jan 30 14:11:06.162341 sshd[2086]: pam_unix(sshd:session): session closed for user core Jan 30 14:11:06.164397 systemd[1]: sshd@8-139.178.70.199:22-147.75.109.163:59216.service: Deactivated successfully. Jan 30 14:11:06.165206 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:11:06.165291 systemd[1]: session-11.scope: Consumed 4.488s CPU time, 200.0M memory peak, 0B memory swap peak. Jan 30 14:11:06.165688 systemd-logind[1806]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:11:06.166359 systemd-logind[1806]: Removed session 11. Jan 30 14:11:09.458380 systemd[1]: Started sshd@9-139.178.70.199:22-218.92.0.155:53684.service - OpenSSH per-connection server daemon (218.92.0.155:53684). Jan 30 14:11:10.512584 sshd[3416]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 14:11:12.251477 sshd[3414]: PAM: Permission denied for root from 218.92.0.155 Jan 30 14:11:12.528624 sshd[3417]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 14:11:14.207379 sshd[3414]: PAM: Permission denied for root from 218.92.0.155 Jan 30 14:11:14.485175 sshd[3418]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 14:11:14.541170 update_engine[1811]: I20250130 14:11:14.541122 1811 update_attempter.cc:509] Updating boot flags... Jan 30 14:11:14.576152 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3428) Jan 30 14:11:14.603114 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3427) Jan 30 14:11:14.630141 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3427) Jan 30 14:11:14.716716 kubelet[3251]: I0130 14:11:14.716648 3251 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:11:14.717772 kubelet[3251]: I0130 14:11:14.717732 3251 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:11:14.717936 containerd[1824]: time="2025-01-30T14:11:14.717307637Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:11:15.674910 kubelet[3251]: I0130 14:11:15.674826 3251 topology_manager.go:215] "Topology Admit Handler" podUID="2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c" podNamespace="kube-system" podName="kube-proxy-f8vtz" Jan 30 14:11:15.690333 systemd[1]: Created slice kubepods-besteffort-pod2ea0ab50_8368_42b2_bc8e_2e62fcd1b52c.slice - libcontainer container kubepods-besteffort-pod2ea0ab50_8368_42b2_bc8e_2e62fcd1b52c.slice. 
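[Note] The pod_startup_latency_tracker records above compute podStartE2EDuration as the watch-observed running time minus podCreationTimestamp; because these static control-plane pods report a zero image-pull window (firstStartedPulling and lastFinishedPulling are the zero time), podStartSLOduration collapses to the same value. A sketch of the arithmetic, using the kube-scheduler timestamps copied from the log:

package main

// slo.go - the arithmetic behind "Observed pod startup duration":
// E2E duration = watchObservedRunningTime - podCreationTimestamp.
import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time print format, which is what kubelet logs use here.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-30 14:11:01 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-30 14:11:02.868495879 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1.868495879s, matching podStartE2EDuration
}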
Jan 30 14:11:15.725913 kubelet[3251]: I0130 14:11:15.725879 3251 topology_manager.go:215] "Topology Admit Handler" podUID="3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-bwh42" Jan 30 14:11:15.729973 systemd[1]: Created slice kubepods-besteffort-pod3a35b042_640c_4e7a_9ca8_38f8d8b3a3ed.slice - libcontainer container kubepods-besteffort-pod3a35b042_640c_4e7a_9ca8_38f8d8b3a3ed.slice. Jan 30 14:11:15.748445 kubelet[3251]: I0130 14:11:15.748392 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4vbb\" (UniqueName: \"kubernetes.io/projected/3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed-kube-api-access-m4vbb\") pod \"tigera-operator-7bc55997bb-bwh42\" (UID: \"3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed\") " pod="tigera-operator/tigera-operator-7bc55997bb-bwh42" Jan 30 14:11:15.748445 kubelet[3251]: I0130 14:11:15.748423 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c-kube-proxy\") pod \"kube-proxy-f8vtz\" (UID: \"2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c\") " pod="kube-system/kube-proxy-f8vtz" Jan 30 14:11:15.748445 kubelet[3251]: I0130 14:11:15.748436 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c-xtables-lock\") pod \"kube-proxy-f8vtz\" (UID: \"2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c\") " pod="kube-system/kube-proxy-f8vtz" Jan 30 14:11:15.748445 kubelet[3251]: I0130 14:11:15.748446 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c-lib-modules\") pod \"kube-proxy-f8vtz\" (UID: \"2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c\") " pod="kube-system/kube-proxy-f8vtz" Jan 30 14:11:15.748615 kubelet[3251]: I0130 14:11:15.748457 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2427\" (UniqueName: \"kubernetes.io/projected/2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c-kube-api-access-k2427\") pod \"kube-proxy-f8vtz\" (UID: \"2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c\") " pod="kube-system/kube-proxy-f8vtz" Jan 30 14:11:15.748615 kubelet[3251]: I0130 14:11:15.748469 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed-var-lib-calico\") pod \"tigera-operator-7bc55997bb-bwh42\" (UID: \"3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed\") " pod="tigera-operator/tigera-operator-7bc55997bb-bwh42" Jan 30 14:11:16.013465 containerd[1824]: time="2025-01-30T14:11:16.013377380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8vtz,Uid:2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c,Namespace:kube-system,Attempt:0,}" Jan 30 14:11:16.025536 containerd[1824]: time="2025-01-30T14:11:16.025481515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:16.025621 containerd[1824]: time="2025-01-30T14:11:16.025523223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:16.025621 containerd[1824]: time="2025-01-30T14:11:16.025564475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:16.025803 containerd[1824]: time="2025-01-30T14:11:16.025785087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:16.032064 containerd[1824]: time="2025-01-30T14:11:16.032040162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-bwh42,Uid:3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:11:16.042236 containerd[1824]: time="2025-01-30T14:11:16.042099082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:16.042236 containerd[1824]: time="2025-01-30T14:11:16.042168578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:16.042236 containerd[1824]: time="2025-01-30T14:11:16.042190544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:16.042397 containerd[1824]: time="2025-01-30T14:11:16.042250522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:16.050362 systemd[1]: Started cri-containerd-9f96c90a2554eba97b87c78d96f58e9d9a9aef679ac989032b2ea33bd4df9c3f.scope - libcontainer container 9f96c90a2554eba97b87c78d96f58e9d9a9aef679ac989032b2ea33bd4df9c3f. Jan 30 14:11:16.055059 systemd[1]: Started cri-containerd-c004caaa0e2b0eab0349076ef6aea374b998e035f24aa0e70658b0ac3ae049be.scope - libcontainer container c004caaa0e2b0eab0349076ef6aea374b998e035f24aa0e70658b0ac3ae049be. 
Jan 30 14:11:16.061969 containerd[1824]: time="2025-01-30T14:11:16.061950658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8vtz,Uid:2ea0ab50-8368-42b2-bc8e-2e62fcd1b52c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f96c90a2554eba97b87c78d96f58e9d9a9aef679ac989032b2ea33bd4df9c3f\"" Jan 30 14:11:16.063255 containerd[1824]: time="2025-01-30T14:11:16.063235140Z" level=info msg="CreateContainer within sandbox \"9f96c90a2554eba97b87c78d96f58e9d9a9aef679ac989032b2ea33bd4df9c3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:11:16.069011 containerd[1824]: time="2025-01-30T14:11:16.068991532Z" level=info msg="CreateContainer within sandbox \"9f96c90a2554eba97b87c78d96f58e9d9a9aef679ac989032b2ea33bd4df9c3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02f5f9f8cb7b25f963933888c083e964354c2f560e1a5dc2e965407da7be37c1\"" Jan 30 14:11:16.069577 containerd[1824]: time="2025-01-30T14:11:16.069496596Z" level=info msg="StartContainer for \"02f5f9f8cb7b25f963933888c083e964354c2f560e1a5dc2e965407da7be37c1\"" Jan 30 14:11:16.080032 containerd[1824]: time="2025-01-30T14:11:16.079993754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-bwh42,Uid:3a35b042-640c-4e7a-9ca8-38f8d8b3a3ed,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c004caaa0e2b0eab0349076ef6aea374b998e035f24aa0e70658b0ac3ae049be\"" Jan 30 14:11:16.080937 containerd[1824]: time="2025-01-30T14:11:16.080919575Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:11:16.096228 systemd[1]: Started cri-containerd-02f5f9f8cb7b25f963933888c083e964354c2f560e1a5dc2e965407da7be37c1.scope - libcontainer container 02f5f9f8cb7b25f963933888c083e964354c2f560e1a5dc2e965407da7be37c1. Jan 30 14:11:16.109090 containerd[1824]: time="2025-01-30T14:11:16.109062278Z" level=info msg="StartContainer for \"02f5f9f8cb7b25f963933888c083e964354c2f560e1a5dc2e965407da7be37c1\" returns successfully" Jan 30 14:11:16.902469 kubelet[3251]: I0130 14:11:16.902427 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f8vtz" podStartSLOduration=1.9024146769999999 podStartE2EDuration="1.902414677s" podCreationTimestamp="2025-01-30 14:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:16.902363633 +0000 UTC m=+15.101809511" watchObservedRunningTime="2025-01-30 14:11:16.902414677 +0000 UTC m=+15.101860552" Jan 30 14:11:17.107831 sshd[3414]: PAM: Permission denied for root from 218.92.0.155 Jan 30 14:11:17.244831 sshd[3414]: Received disconnect from 218.92.0.155 port 53684:11: [preauth] Jan 30 14:11:17.244831 sshd[3414]: Disconnected from authenticating user root 218.92.0.155 port 53684 [preauth] Jan 30 14:11:17.245613 systemd[1]: sshd@9-139.178.70.199:22-218.92.0.155:53684.service: Deactivated successfully. Jan 30 14:11:17.417908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648701206.mount: Deactivated successfully. 
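[Note] The tmpmount unit name just above ("var-lib-containerd-tmpmounts-containerd\x2dmount2648701206.mount") follows systemd's path escaping: "/" separators become "-", and a literal "-" inside a path component is hex-escaped to "\x2d". A simplified Go sketch of that rule; real systemd-escape also hex-escapes other bytes outside [a-zA-Z0-9:_.], but these two cases cover the path seen in the log:

package main

// unitname.go - simplified systemd path escaping, enough for the
// /var/lib/containerd/tmpmounts/containerd-mountNNN units in this log.
import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`) // literal '-' inside a component
	return strings.ReplaceAll(p, "/", "-") // '/' separators become '-'
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2648701206") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount2648701206.mount
}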
Jan 30 14:11:17.625326 containerd[1824]: time="2025-01-30T14:11:17.625262467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:17.625549 containerd[1824]: time="2025-01-30T14:11:17.625480704Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 14:11:17.625857 containerd[1824]: time="2025-01-30T14:11:17.625845815Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:17.626867 containerd[1824]: time="2025-01-30T14:11:17.626825394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:17.627343 containerd[1824]: time="2025-01-30T14:11:17.627299815Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.546357786s" Jan 30 14:11:17.627343 containerd[1824]: time="2025-01-30T14:11:17.627315993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 14:11:17.628375 containerd[1824]: time="2025-01-30T14:11:17.628334917Z" level=info msg="CreateContainer within sandbox \"c004caaa0e2b0eab0349076ef6aea374b998e035f24aa0e70658b0ac3ae049be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:11:17.632237 containerd[1824]: time="2025-01-30T14:11:17.632193531Z" level=info msg="CreateContainer within sandbox \"c004caaa0e2b0eab0349076ef6aea374b998e035f24aa0e70658b0ac3ae049be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d0bd30a8167482990b71d9fba7fd7a866a594064a54b59b9429932a40a0ac109\"" Jan 30 14:11:17.632402 containerd[1824]: time="2025-01-30T14:11:17.632384659Z" level=info msg="StartContainer for \"d0bd30a8167482990b71d9fba7fd7a866a594064a54b59b9429932a40a0ac109\"" Jan 30 14:11:17.658640 systemd[1]: Started cri-containerd-d0bd30a8167482990b71d9fba7fd7a866a594064a54b59b9429932a40a0ac109.scope - libcontainer container d0bd30a8167482990b71d9fba7fd7a866a594064a54b59b9429932a40a0ac109. 
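[Note] The operator pull above reports "bytes read=21762497" against a wall time of 1.546357786s on the "Pulled image" line (the 21758492 figure is the resolved content size, slightly smaller than the bytes actually transferred). A quick sketch of the implied transfer rate:

package main

// pullrate.go - back-of-the-envelope rate for the quay.io/tigera/operator pull:
// bytes read over the reported pull wall time.
import "fmt"

func main() {
	bytesRead := 21762497.0 // from the "stop pulling image" event
	seconds := 1.546357786  // from the "Pulled image ... in" duration
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~13.4 MiB/s
}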
Jan 30 14:11:17.710014 containerd[1824]: time="2025-01-30T14:11:17.709970395Z" level=info msg="StartContainer for \"d0bd30a8167482990b71d9fba7fd7a866a594064a54b59b9429932a40a0ac109\" returns successfully" Jan 30 14:11:17.904692 kubelet[3251]: I0130 14:11:17.904460 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-bwh42" podStartSLOduration=1.357383306 podStartE2EDuration="2.90442445s" podCreationTimestamp="2025-01-30 14:11:15 +0000 UTC" firstStartedPulling="2025-01-30 14:11:16.080687864 +0000 UTC m=+14.280133743" lastFinishedPulling="2025-01-30 14:11:17.627729008 +0000 UTC m=+15.827174887" observedRunningTime="2025-01-30 14:11:17.903958086 +0000 UTC m=+16.103404034" watchObservedRunningTime="2025-01-30 14:11:17.90442445 +0000 UTC m=+16.103870376" Jan 30 14:11:20.760502 kubelet[3251]: I0130 14:11:20.760423 3251 topology_manager.go:215] "Topology Admit Handler" podUID="a6045b14-3089-4015-89cb-25c59b832f86" podNamespace="calico-system" podName="calico-typha-867855fc8-7hfd5" Jan 30 14:11:20.775798 systemd[1]: Created slice kubepods-besteffort-poda6045b14_3089_4015_89cb_25c59b832f86.slice - libcontainer container kubepods-besteffort-poda6045b14_3089_4015_89cb_25c59b832f86.slice. Jan 30 14:11:20.788068 kubelet[3251]: I0130 14:11:20.788020 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6045b14-3089-4015-89cb-25c59b832f86-tigera-ca-bundle\") pod \"calico-typha-867855fc8-7hfd5\" (UID: \"a6045b14-3089-4015-89cb-25c59b832f86\") " pod="calico-system/calico-typha-867855fc8-7hfd5" Jan 30 14:11:20.788245 kubelet[3251]: I0130 14:11:20.788087 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2vx\" (UniqueName: \"kubernetes.io/projected/a6045b14-3089-4015-89cb-25c59b832f86-kube-api-access-nk2vx\") pod \"calico-typha-867855fc8-7hfd5\" (UID: \"a6045b14-3089-4015-89cb-25c59b832f86\") " pod="calico-system/calico-typha-867855fc8-7hfd5" Jan 30 14:11:20.788245 kubelet[3251]: I0130 14:11:20.788138 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6045b14-3089-4015-89cb-25c59b832f86-typha-certs\") pod \"calico-typha-867855fc8-7hfd5\" (UID: \"a6045b14-3089-4015-89cb-25c59b832f86\") " pod="calico-system/calico-typha-867855fc8-7hfd5" Jan 30 14:11:20.790127 kubelet[3251]: I0130 14:11:20.790095 3251 topology_manager.go:215] "Topology Admit Handler" podUID="e813f197-27e3-44ed-9ab5-464364170362" podNamespace="calico-system" podName="calico-node-hvw2w" Jan 30 14:11:20.794716 systemd[1]: Created slice kubepods-besteffort-pode813f197_27e3_44ed_9ab5_464364170362.slice - libcontainer container kubepods-besteffort-pode813f197_27e3_44ed_9ab5_464364170362.slice. 
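[Note] The "Created slice kubepods-besteffort-pod...slice" records in this window show the systemd cgroup driver's naming scheme for pod cgroups: "kubepods", the QoS class, then "pod" plus the pod UID with its dashes mapped to underscores (a plain "-" is a unit-name separator, so it cannot appear inside the UID segment). A sketch, assuming the calico-typha pod UID from the log below:

package main

// podslice.go - derive the systemd slice name kubelet creates for a pod,
// as seen in the "Created slice" lines in this log.
import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSlice("besteffort", "a6045b14-3089-4015-89cb-25c59b832f86"))
	// kubepods-besteffort-poda6045b14_3089_4015_89cb_25c59b832f86.slice
}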
Jan 30 14:11:20.888802 kubelet[3251]: I0130 14:11:20.888702 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-xtables-lock\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889208 kubelet[3251]: I0130 14:11:20.888881 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-cni-net-dir\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889208 kubelet[3251]: I0130 14:11:20.889021 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e813f197-27e3-44ed-9ab5-464364170362-node-certs\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889208 kubelet[3251]: I0130 14:11:20.889136 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-policysync\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889790 kubelet[3251]: I0130 14:11:20.889236 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-flexvol-driver-host\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889790 kubelet[3251]: I0130 14:11:20.889340 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-lib-modules\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889790 kubelet[3251]: I0130 14:11:20.889408 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-var-run-calico\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889790 kubelet[3251]: I0130 14:11:20.889468 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbln\" (UniqueName: \"kubernetes.io/projected/e813f197-27e3-44ed-9ab5-464364170362-kube-api-access-ftbln\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.889790 kubelet[3251]: I0130 14:11:20.889688 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-var-lib-calico\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.890378 kubelet[3251]: I0130 14:11:20.889780 3251 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-cni-log-dir\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.890378 kubelet[3251]: I0130 14:11:20.889855 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e813f197-27e3-44ed-9ab5-464364170362-tigera-ca-bundle\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.890378 kubelet[3251]: I0130 14:11:20.889941 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e813f197-27e3-44ed-9ab5-464364170362-cni-bin-dir\") pod \"calico-node-hvw2w\" (UID: \"e813f197-27e3-44ed-9ab5-464364170362\") " pod="calico-system/calico-node-hvw2w" Jan 30 14:11:20.936687 kubelet[3251]: I0130 14:11:20.936638 3251 topology_manager.go:215] "Topology Admit Handler" podUID="7f648eb2-8d49-44e7-a889-00115811af73" podNamespace="calico-system" podName="csi-node-driver-gwmxx" Jan 30 14:11:20.937059 kubelet[3251]: E0130 14:11:20.937031 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwmxx" podUID="7f648eb2-8d49-44e7-a889-00115811af73" Jan 30 14:11:20.990542 kubelet[3251]: I0130 14:11:20.990461 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f648eb2-8d49-44e7-a889-00115811af73-kubelet-dir\") pod \"csi-node-driver-gwmxx\" (UID: \"7f648eb2-8d49-44e7-a889-00115811af73\") " pod="calico-system/csi-node-driver-gwmxx" Jan 30 14:11:20.990931 kubelet[3251]: I0130 14:11:20.990854 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f648eb2-8d49-44e7-a889-00115811af73-varrun\") pod \"csi-node-driver-gwmxx\" (UID: \"7f648eb2-8d49-44e7-a889-00115811af73\") " pod="calico-system/csi-node-driver-gwmxx" Jan 30 14:11:20.991223 kubelet[3251]: I0130 14:11:20.990989 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f648eb2-8d49-44e7-a889-00115811af73-socket-dir\") pod \"csi-node-driver-gwmxx\" (UID: \"7f648eb2-8d49-44e7-a889-00115811af73\") " pod="calico-system/csi-node-driver-gwmxx" Jan 30 14:11:20.991223 kubelet[3251]: I0130 14:11:20.991139 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kx59\" (UniqueName: \"kubernetes.io/projected/7f648eb2-8d49-44e7-a889-00115811af73-kube-api-access-7kx59\") pod \"csi-node-driver-gwmxx\" (UID: \"7f648eb2-8d49-44e7-a889-00115811af73\") " pod="calico-system/csi-node-driver-gwmxx" Jan 30 14:11:20.991895 kubelet[3251]: E0130 14:11:20.991813 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:20.991895 kubelet[3251]: W0130 14:11:20.991859 3251 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:20.991895 kubelet[3251]: E0130 14:11:20.991900 3251 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:11:20.992688 kubelet[3251]: E0130 14:11:20.992602 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:11:20.992688 kubelet[3251]: W0130 14:11:20.992642 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:11:20.992688 kubelet[3251]: E0130 14:11:20.992689 3251 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:11:20.995881 kubelet[3251]: I0130 14:11:20.995594 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f648eb2-8d49-44e7-a889-00115811af73-registration-dir\") pod \"csi-node-driver-gwmxx\" (UID: \"7f648eb2-8d49-44e7-a889-00115811af73\") " pod="calico-system/csi-node-driver-gwmxx"
Jan 30 14:11:21.081502 containerd[1824]: time="2025-01-30T14:11:21.081382927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867855fc8-7hfd5,Uid:a6045b14-3089-4015-89cb-25c59b832f86,Namespace:calico-system,Attempt:0,}"
Jan 30 14:11:21.092599 containerd[1824]: time="2025-01-30T14:11:21.092323788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:11:21.092599 containerd[1824]: time="2025-01-30T14:11:21.092542856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:11:21.092599 containerd[1824]: time="2025-01-30T14:11:21.092551835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:11:21.092709 containerd[1824]: time="2025-01-30T14:11:21.092593375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:11:21.097035 containerd[1824]: time="2025-01-30T14:11:21.097012784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hvw2w,Uid:e813f197-27e3-44ed-9ab5-464364170362,Namespace:calico-system,Attempt:0,}"
Jan 30 14:11:21.105277 systemd[1]: Started cri-containerd-2ba9b7b592c5ccf592a07db01123a349a2b0bdd4554e0d0fecbd13cf10e38c12.scope - libcontainer container 2ba9b7b592c5ccf592a07db01123a349a2b0bdd4554e0d0fecbd13cf10e38c12.
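The driver-call.go/plugins.go error triplet above is the kubelet's FlexVolume probe loop: on each plugin rescan it executes every binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument init and unmarshals stdout as JSON, so a missing binary yields empty output and the "unexpected end of JSON input" failure; the same triplet repeats on every rescan until the binary appears. Below is a minimal sketch of that call contract, written against the upstream FlexVolume spec rather than anything in this log (the struct and its fields follow the spec; the program itself is a hypothetical stand-in for the missing uds binary):

// flexvolume_stub.go - hypothetical stand-in for the missing driver binary.
// The kubelet runs "<driver> init" and parses stdout as JSON; an empty
// stdout is exactly the "unexpected end of JSON input" seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the FlexVolume spec's result object.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // no controller attach step
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not handle is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}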
Jan 30 14:11:21.106077 containerd[1824]: time="2025-01-30T14:11:21.106029195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:11:21.106077 containerd[1824]: time="2025-01-30T14:11:21.106066614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:11:21.106136 containerd[1824]: time="2025-01-30T14:11:21.106073814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:11:21.106209 containerd[1824]: time="2025-01-30T14:11:21.106125698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:11:21.112599 systemd[1]: Started cri-containerd-5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607.scope - libcontainer container 5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607.
Jan 30 14:11:21.122432 containerd[1824]: time="2025-01-30T14:11:21.122382217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hvw2w,Uid:e813f197-27e3-44ed-9ab5-464364170362,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\""
Jan 30 14:11:21.124145 containerd[1824]: time="2025-01-30T14:11:21.124123315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 14:11:21.129478 containerd[1824]: time="2025-01-30T14:11:21.129429624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867855fc8-7hfd5,Uid:a6045b14-3089-4015-89cb-25c59b832f86,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ba9b7b592c5ccf592a07db01123a349a2b0bdd4554e0d0fecbd13cf10e38c12\""
Jan 30 14:11:22.421576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417642796.mount: Deactivated successfully.
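The pod2daemon-flexvol pull that starts here is what eventually quiets the probe errors: per Calico's packaging, this image backs the flexvol-driver init container (created further down in this log), which installs the uds binary into the nodeagent~uds plugin directory the kubelet is probing. A hedged helper sketch to confirm the driver has landed; only the path is taken from the errors above, everything else is an assumption for illustration:

// check_flexvol.go - hypothetical check, not part of any of the tools logged here.
package main

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver not installed yet:", err) // the state the probe loop is hitting
		os.Exit(1)
	}
	fmt.Printf("driver present: %s (mode %v, %d bytes)\n", driver, info.Mode(), info.Size())
}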
Jan 30 14:11:22.465682 containerd[1824]: time="2025-01-30T14:11:22.465625722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:22.465895 containerd[1824]: time="2025-01-30T14:11:22.465833221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 30 14:11:22.466174 containerd[1824]: time="2025-01-30T14:11:22.466146720Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:22.467109 containerd[1824]: time="2025-01-30T14:11:22.467057368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:22.467484 containerd[1824]: time="2025-01-30T14:11:22.467442878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.343294725s"
Jan 30 14:11:22.467484 containerd[1824]: time="2025-01-30T14:11:22.467460260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 30 14:11:22.468073 containerd[1824]: time="2025-01-30T14:11:22.468062529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 14:11:22.468808 containerd[1824]: time="2025-01-30T14:11:22.468792864Z" level=info msg="CreateContainer within sandbox \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 30 14:11:22.474034 containerd[1824]: time="2025-01-30T14:11:22.473991911Z" level=info msg="CreateContainer within sandbox \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c\""
Jan 30 14:11:22.474313 containerd[1824]: time="2025-01-30T14:11:22.474302975Z" level=info msg="StartContainer for \"53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c\""
Jan 30 14:11:22.503284 systemd[1]: Started cri-containerd-53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c.scope - libcontainer container 53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c.
Jan 30 14:11:22.518223 containerd[1824]: time="2025-01-30T14:11:22.518189534Z" level=info msg="StartContainer for \"53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c\" returns successfully"
Jan 30 14:11:22.525303 systemd[1]: cri-containerd-53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c.scope: Deactivated successfully.
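The "in 1.343294725s" in the Pulled line above is containerd's measured pull time. As a sanity check, the two timestamps in this log (the PullImage request and the Pulled event) reproduce it to within the few tens of microseconds it took to emit the log line; a small sketch of that arithmetic:

// pull_duration.go - recomputes the pull time from the two log timestamps above.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2025-01-30T14:11:21.124123315Z") // PullImage issued
	logged, _ := time.Parse(time.RFC3339Nano, "2025-01-30T14:11:22.467442878Z")  // "Pulled image" event stamped
	fmt.Println(logged.Sub(started)) // 1.343319563s, ~25µs over the reported
	// 1.343294725s because the event line is written just after the pull ends.
}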
Jan 30 14:11:22.785786 containerd[1824]: time="2025-01-30T14:11:22.785714420Z" level=info msg="shim disconnected" id=53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c namespace=k8s.io
Jan 30 14:11:22.785786 containerd[1824]: time="2025-01-30T14:11:22.785781652Z" level=warning msg="cleaning up after shim disconnected" id=53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c namespace=k8s.io
Jan 30 14:11:22.785786 containerd[1824]: time="2025-01-30T14:11:22.785786873Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:11:22.846482 kubelet[3251]: E0130 14:11:22.846414 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwmxx" podUID="7f648eb2-8d49-44e7-a889-00115811af73"
Jan 30 14:11:22.897890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53646de14bdea16e4e59310b4e7bc2b3968dfa7f3f60f1694026c728267b346c-rootfs.mount: Deactivated successfully.
Jan 30 14:11:24.217742 containerd[1824]: time="2025-01-30T14:11:24.217714101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:24.217973 containerd[1824]: time="2025-01-30T14:11:24.217872090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 30 14:11:24.218234 containerd[1824]: time="2025-01-30T14:11:24.218223804Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:24.219274 containerd[1824]: time="2025-01-30T14:11:24.219264192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:24.219709 containerd[1824]: time="2025-01-30T14:11:24.219693737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.751614547s"
Jan 30 14:11:24.219732 containerd[1824]: time="2025-01-30T14:11:24.219715454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 14:11:24.220167 containerd[1824]: time="2025-01-30T14:11:24.220159137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 30 14:11:24.223352 containerd[1824]: time="2025-01-30T14:11:24.223331556Z" level=info msg="CreateContainer within sandbox \"2ba9b7b592c5ccf592a07db01123a349a2b0bdd4554e0d0fecbd13cf10e38c12\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 14:11:24.227540 containerd[1824]: time="2025-01-30T14:11:24.227497428Z" level=info msg="CreateContainer within sandbox \"2ba9b7b592c5ccf592a07db01123a349a2b0bdd4554e0d0fecbd13cf10e38c12\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"89483158dd6d3c644e485421d674f6fd918969a3604f22cbcca4ba898a489926\""
Jan 30 14:11:24.227746 containerd[1824]: time="2025-01-30T14:11:24.227705430Z" level=info msg="StartContainer for \"89483158dd6d3c644e485421d674f6fd918969a3604f22cbcca4ba898a489926\""
Jan 30 14:11:24.252391 systemd[1]: Started cri-containerd-89483158dd6d3c644e485421d674f6fd918969a3604f22cbcca4ba898a489926.scope - libcontainer container 89483158dd6d3c644e485421d674f6fd918969a3604f22cbcca4ba898a489926.
Jan 30 14:11:24.276811 containerd[1824]: time="2025-01-30T14:11:24.276784719Z" level=info msg="StartContainer for \"89483158dd6d3c644e485421d674f6fd918969a3604f22cbcca4ba898a489926\" returns successfully"
Jan 30 14:11:24.845999 kubelet[3251]: E0130 14:11:24.845898 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwmxx" podUID="7f648eb2-8d49-44e7-a889-00115811af73"
Jan 30 14:11:24.932490 kubelet[3251]: I0130 14:11:24.932443 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-867855fc8-7hfd5" podStartSLOduration=1.842278069 podStartE2EDuration="4.932431627s" podCreationTimestamp="2025-01-30 14:11:20 +0000 UTC" firstStartedPulling="2025-01-30 14:11:21.129954139 +0000 UTC m=+19.329400017" lastFinishedPulling="2025-01-30 14:11:24.220107697 +0000 UTC m=+22.419553575" observedRunningTime="2025-01-30 14:11:24.931895493 +0000 UTC m=+23.131341372" watchObservedRunningTime="2025-01-30 14:11:24.932431627 +0000 UTC m=+23.131877503"
Jan 30 14:11:25.919529 kubelet[3251]: I0130 14:11:25.919497 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 14:11:26.495531 containerd[1824]: time="2025-01-30T14:11:26.495505076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:26.495788 containerd[1824]: time="2025-01-30T14:11:26.495751589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 30 14:11:26.496147 containerd[1824]: time="2025-01-30T14:11:26.496132775Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:26.497084 containerd[1824]: time="2025-01-30T14:11:26.497070190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:11:26.497520 containerd[1824]: time="2025-01-30T14:11:26.497506642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.27733295s"
Jan 30 14:11:26.497565 containerd[1824]: time="2025-01-30T14:11:26.497523332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 30 14:11:26.498600 containerd[1824]: time="2025-01-30T14:11:26.498583639Z" level=info msg="CreateContainer within sandbox \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 14:11:26.503291 containerd[1824]: time="2025-01-30T14:11:26.503245506Z" level=info msg="CreateContainer within sandbox \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9\""
Jan 30 14:11:26.503514 containerd[1824]: time="2025-01-30T14:11:26.503464642Z" level=info msg="StartContainer for \"110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9\""
Jan 30 14:11:26.540374 systemd[1]: Started cri-containerd-110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9.scope - libcontainer container 110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9.
Jan 30 14:11:26.561119 containerd[1824]: time="2025-01-30T14:11:26.561074362Z" level=info msg="StartContainer for \"110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9\" returns successfully"
Jan 30 14:11:26.845634 kubelet[3251]: E0130 14:11:26.845535 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwmxx" podUID="7f648eb2-8d49-44e7-a889-00115811af73"
Jan 30 14:11:27.120480 systemd[1]: cri-containerd-110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9.scope: Deactivated successfully.
Jan 30 14:11:27.129764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9-rootfs.mount: Deactivated successfully.
Jan 30 14:11:27.221175 kubelet[3251]: I0130 14:11:27.221085 3251 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 14:11:27.273824 kubelet[3251]: I0130 14:11:27.272368 3251 topology_manager.go:215] "Topology Admit Handler" podUID="3b058520-fe77-4288-8183-1294854fc085" podNamespace="kube-system" podName="coredns-7db6d8ff4d-84mks"
Jan 30 14:11:27.274334 kubelet[3251]: I0130 14:11:27.274037 3251 topology_manager.go:215] "Topology Admit Handler" podUID="d87a2984-6eab-4a62-8e08-b01dcb68024f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zcffb"
Jan 30 14:11:27.275191 kubelet[3251]: I0130 14:11:27.275099 3251 topology_manager.go:215] "Topology Admit Handler" podUID="d760cfdd-6a44-4898-b2ca-8056d278d0dd" podNamespace="calico-system" podName="calico-kube-controllers-74c89cd695-jzlrt"
Jan 30 14:11:27.276262 kubelet[3251]: I0130 14:11:27.276186 3251 topology_manager.go:215] "Topology Admit Handler" podUID="a767c38c-841b-4ecb-8ffb-49c977092016" podNamespace="calico-apiserver" podName="calico-apiserver-5bb9cfb479-dvdnb"
Jan 30 14:11:27.277307 kubelet[3251]: I0130 14:11:27.277238 3251 topology_manager.go:215] "Topology Admit Handler" podUID="5753cc97-d110-42c7-b5f2-97689b20b507" podNamespace="calico-apiserver" podName="calico-apiserver-5bb9cfb479-2xzmg"
Jan 30 14:11:27.291375 systemd[1]: Created slice kubepods-burstable-pod3b058520_fe77_4288_8183_1294854fc085.slice - libcontainer container kubepods-burstable-pod3b058520_fe77_4288_8183_1294854fc085.slice.
Jan 30 14:11:27.302054 systemd[1]: Created slice kubepods-burstable-podd87a2984_6eab_4a62_8e08_b01dcb68024f.slice - libcontainer container kubepods-burstable-podd87a2984_6eab_4a62_8e08_b01dcb68024f.slice.
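The pod_startup_latency_tracker line above for calico-typha-867855fc8-7hfd5 is internally consistent: the E2E duration is the observed running time minus the pod creation timestamp, and the SLO duration additionally excludes the image-pull window. A short sketch recomputing both from the logged timestamps (values copied from the log, only reformatted to RFC 3339):

// startup_latency.go - reproduces podStartE2EDuration and podStartSLOduration
// for calico-typha-867855fc8-7hfd5 from the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-30T14:11:20Z")             // podCreationTimestamp
	firstPull := mustParse("2025-01-30T14:11:21.129954139Z") // firstStartedPulling
	lastPull := mustParse("2025-01-30T14:11:24.220107697Z")  // lastFinishedPulling
	running := mustParse("2025-01-30T14:11:24.932431627Z")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 4.932431627s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // 1.842278069s (podStartSLOduration)
	fmt.Println(e2e, slo)
}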
Jan 30 14:11:27.306813 systemd[1]: Created slice kubepods-besteffort-podd760cfdd_6a44_4898_b2ca_8056d278d0dd.slice - libcontainer container kubepods-besteffort-podd760cfdd_6a44_4898_b2ca_8056d278d0dd.slice.
Jan 30 14:11:27.311714 systemd[1]: Created slice kubepods-besteffort-poda767c38c_841b_4ecb_8ffb_49c977092016.slice - libcontainer container kubepods-besteffort-poda767c38c_841b_4ecb_8ffb_49c977092016.slice.
Jan 30 14:11:27.315062 systemd[1]: Created slice kubepods-besteffort-pod5753cc97_d110_42c7_b5f2_97689b20b507.slice - libcontainer container kubepods-besteffort-pod5753cc97_d110_42c7_b5f2_97689b20b507.slice.
Jan 30 14:11:27.348635 kubelet[3251]: I0130 14:11:27.348591 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b058520-fe77-4288-8183-1294854fc085-config-volume\") pod \"coredns-7db6d8ff4d-84mks\" (UID: \"3b058520-fe77-4288-8183-1294854fc085\") " pod="kube-system/coredns-7db6d8ff4d-84mks"
Jan 30 14:11:27.348635 kubelet[3251]: I0130 14:11:27.348617 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkd84\" (UniqueName: \"kubernetes.io/projected/a767c38c-841b-4ecb-8ffb-49c977092016-kube-api-access-qkd84\") pod \"calico-apiserver-5bb9cfb479-dvdnb\" (UID: \"a767c38c-841b-4ecb-8ffb-49c977092016\") " pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb"
Jan 30 14:11:27.348745 kubelet[3251]: I0130 14:11:27.348638 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvsv5\" (UniqueName: \"kubernetes.io/projected/3b058520-fe77-4288-8183-1294854fc085-kube-api-access-wvsv5\") pod \"coredns-7db6d8ff4d-84mks\" (UID: \"3b058520-fe77-4288-8183-1294854fc085\") " pod="kube-system/coredns-7db6d8ff4d-84mks"
Jan 30 14:11:27.348745 kubelet[3251]: I0130 14:11:27.348651 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcsk\" (UniqueName: \"kubernetes.io/projected/d760cfdd-6a44-4898-b2ca-8056d278d0dd-kube-api-access-pvcsk\") pod \"calico-kube-controllers-74c89cd695-jzlrt\" (UID: \"d760cfdd-6a44-4898-b2ca-8056d278d0dd\") " pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt"
Jan 30 14:11:27.348745 kubelet[3251]: I0130 14:11:27.348666 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5753cc97-d110-42c7-b5f2-97689b20b507-calico-apiserver-certs\") pod \"calico-apiserver-5bb9cfb479-2xzmg\" (UID: \"5753cc97-d110-42c7-b5f2-97689b20b507\") " pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg"
Jan 30 14:11:27.348745 kubelet[3251]: I0130 14:11:27.348704 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d87a2984-6eab-4a62-8e08-b01dcb68024f-config-volume\") pod \"coredns-7db6d8ff4d-zcffb\" (UID: \"d87a2984-6eab-4a62-8e08-b01dcb68024f\") " pod="kube-system/coredns-7db6d8ff4d-zcffb"
Jan 30 14:11:27.348745 kubelet[3251]: I0130 14:11:27.348726 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a767c38c-841b-4ecb-8ffb-49c977092016-calico-apiserver-certs\") pod \"calico-apiserver-5bb9cfb479-dvdnb\" (UID: \"a767c38c-841b-4ecb-8ffb-49c977092016\") " pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb"
Jan 30 14:11:27.348864 kubelet[3251]: I0130 14:11:27.348742 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxv9c\" (UniqueName: \"kubernetes.io/projected/5753cc97-d110-42c7-b5f2-97689b20b507-kube-api-access-dxv9c\") pod \"calico-apiserver-5bb9cfb479-2xzmg\" (UID: \"5753cc97-d110-42c7-b5f2-97689b20b507\") " pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg"
Jan 30 14:11:27.348864 kubelet[3251]: I0130 14:11:27.348755 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lt96\" (UniqueName: \"kubernetes.io/projected/d87a2984-6eab-4a62-8e08-b01dcb68024f-kube-api-access-2lt96\") pod \"coredns-7db6d8ff4d-zcffb\" (UID: \"d87a2984-6eab-4a62-8e08-b01dcb68024f\") " pod="kube-system/coredns-7db6d8ff4d-zcffb"
Jan 30 14:11:27.348864 kubelet[3251]: I0130 14:11:27.348768 3251 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d760cfdd-6a44-4898-b2ca-8056d278d0dd-tigera-ca-bundle\") pod \"calico-kube-controllers-74c89cd695-jzlrt\" (UID: \"d760cfdd-6a44-4898-b2ca-8056d278d0dd\") " pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt"
Jan 30 14:11:27.601297 containerd[1824]: time="2025-01-30T14:11:27.601172209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84mks,Uid:3b058520-fe77-4288-8183-1294854fc085,Namespace:kube-system,Attempt:0,}"
Jan 30 14:11:27.605534 containerd[1824]: time="2025-01-30T14:11:27.605414220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zcffb,Uid:d87a2984-6eab-4a62-8e08-b01dcb68024f,Namespace:kube-system,Attempt:0,}"
Jan 30 14:11:27.610765 containerd[1824]: time="2025-01-30T14:11:27.610651643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c89cd695-jzlrt,Uid:d760cfdd-6a44-4898-b2ca-8056d278d0dd,Namespace:calico-system,Attempt:0,}"
Jan 30 14:11:27.615065 containerd[1824]: time="2025-01-30T14:11:27.614943801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-dvdnb,Uid:a767c38c-841b-4ecb-8ffb-49c977092016,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 14:11:27.618392 containerd[1824]: time="2025-01-30T14:11:27.618278580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-2xzmg,Uid:5753cc97-d110-42c7-b5f2-97689b20b507,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 14:11:27.794083 containerd[1824]: time="2025-01-30T14:11:27.794027746Z" level=info msg="shim disconnected" id=110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9 namespace=k8s.io
Jan 30 14:11:27.794083 containerd[1824]: time="2025-01-30T14:11:27.794078148Z" level=warning msg="cleaning up after shim disconnected" id=110f8f8bed742fa0865da58b0346437098bab273629095908dfa8da97c3065b9 namespace=k8s.io
Jan 30 14:11:27.794083 containerd[1824]: time="2025-01-30T14:11:27.794083936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:11:27.843773 containerd[1824]: time="2025-01-30T14:11:27.843735345Z" level=error msg="Failed to destroy network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.843938 containerd[1824]: time="2025-01-30T14:11:27.843825399Z" level=error msg="Failed to destroy network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844016 containerd[1824]: time="2025-01-30T14:11:27.844001492Z" level=error msg="encountered an error cleaning up failed sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844049 containerd[1824]: time="2025-01-30T14:11:27.844038442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-dvdnb,Uid:a767c38c-841b-4ecb-8ffb-49c977092016,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844117 containerd[1824]: time="2025-01-30T14:11:27.844053772Z" level=error msg="encountered an error cleaning up failed sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844117 containerd[1824]: time="2025-01-30T14:11:27.844085484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zcffb,Uid:d87a2984-6eab-4a62-8e08-b01dcb68024f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844323 kubelet[3251]: E0130 14:11:27.844294 3251 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844369 kubelet[3251]: E0130 14:11:27.844360 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zcffb"
Jan 30 14:11:27.844398 kubelet[3251]: E0130 14:11:27.844384 3251 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zcffb"
Jan 30 14:11:27.844428 kubelet[3251]: E0130 14:11:27.844413 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zcffb_kube-system(d87a2984-6eab-4a62-8e08-b01dcb68024f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zcffb_kube-system(d87a2984-6eab-4a62-8e08-b01dcb68024f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zcffb" podUID="d87a2984-6eab-4a62-8e08-b01dcb68024f"
Jan 30 14:11:27.844539 kubelet[3251]: E0130 14:11:27.844294 3251 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:11:27.844612 kubelet[3251]: E0130 14:11:27.844566 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb"
Jan 30 14:11:27.844612 kubelet[3251]: E0130 14:11:27.844601 3251 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb"
Jan 30 14:11:27.844686 kubelet[3251]: E0130 14:11:27.844640 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb9cfb479-dvdnb_calico-apiserver(a767c38c-841b-4ecb-8ffb-49c977092016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb9cfb479-dvdnb_calico-apiserver(a767c38c-841b-4ecb-8ffb-49c977092016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb" podUID="a767c38c-841b-4ecb-8ffb-49c977092016"
Jan 30 14:11:27.845054 containerd[1824]: time="2025-01-30T14:11:27.845036929Z" level=error msg="Failed to destroy network for sandbox
\"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845287 containerd[1824]: time="2025-01-30T14:11:27.845215465Z" level=error msg="encountered an error cleaning up failed sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845287 containerd[1824]: time="2025-01-30T14:11:27.845265283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-2xzmg,Uid:5753cc97-d110-42c7-b5f2-97689b20b507,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845460 containerd[1824]: time="2025-01-30T14:11:27.845278648Z" level=error msg="Failed to destroy network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845498 kubelet[3251]: E0130 14:11:27.845391 3251 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845498 kubelet[3251]: E0130 14:11:27.845420 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg" Jan 30 14:11:27.845498 kubelet[3251]: E0130 14:11:27.845439 3251 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg" Jan 30 14:11:27.845602 kubelet[3251]: E0130 14:11:27.845471 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb9cfb479-2xzmg_calico-apiserver(5753cc97-d110-42c7-b5f2-97689b20b507)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb9cfb479-2xzmg_calico-apiserver(5753cc97-d110-42c7-b5f2-97689b20b507)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg" podUID="5753cc97-d110-42c7-b5f2-97689b20b507" Jan 30 14:11:27.845645 containerd[1824]: time="2025-01-30T14:11:27.845521234Z" level=error msg="encountered an error cleaning up failed sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845645 containerd[1824]: time="2025-01-30T14:11:27.845551778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c89cd695-jzlrt,Uid:d760cfdd-6a44-4898-b2ca-8056d278d0dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845713 kubelet[3251]: E0130 14:11:27.845635 3251 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.845713 kubelet[3251]: E0130 14:11:27.845660 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt" Jan 30 14:11:27.845713 kubelet[3251]: E0130 14:11:27.845690 3251 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt" Jan 30 14:11:27.845702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549-shm.mount: Deactivated successfully. 
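[Editor's note] Every RunPodSandbox failure in this burst is the same underlying error: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container only writes once it has started and mounted /var/lib/calico/ from the host. At this point in the log calico-node is still pulling its image (see the PullImage line at 14:11:27.930 below), so every CNI ADD and DEL on the host fails identically. A minimal Go sketch of the check that produces this message (illustrative only, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Written by calico/node at startup; absent until then.
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// err prints as "stat /var/lib/calico/nodename: no such file or
		// directory", the exact string wrapped into every failure above.
		fmt.Printf("%v: check that the calico/node container is running\n", err)
		os.Exit(1)
	}
	name, _ := os.ReadFile(nodenameFile)
	fmt.Printf("calico node name: %s\n", name)
}
```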
Jan 30 14:11:27.845907 kubelet[3251]: E0130 14:11:27.845716 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c89cd695-jzlrt_calico-system(d760cfdd-6a44-4898-b2ca-8056d278d0dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c89cd695-jzlrt_calico-system(d760cfdd-6a44-4898-b2ca-8056d278d0dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt" podUID="d760cfdd-6a44-4898-b2ca-8056d278d0dd" Jan 30 14:11:27.845764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff-shm.mount: Deactivated successfully. Jan 30 14:11:27.847095 containerd[1824]: time="2025-01-30T14:11:27.847082818Z" level=error msg="Failed to destroy network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.847242 containerd[1824]: time="2025-01-30T14:11:27.847230945Z" level=error msg="encountered an error cleaning up failed sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.847267 containerd[1824]: time="2025-01-30T14:11:27.847251187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84mks,Uid:3b058520-fe77-4288-8183-1294854fc085,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.847333 kubelet[3251]: E0130 14:11:27.847321 3251 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.847357 kubelet[3251]: E0130 14:11:27.847340 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-84mks" Jan 30 14:11:27.847357 kubelet[3251]: E0130 14:11:27.847350 3251 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-84mks" Jan 30 14:11:27.847394 kubelet[3251]: E0130 14:11:27.847367 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-84mks_kube-system(3b058520-fe77-4288-8183-1294854fc085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-84mks_kube-system(3b058520-fe77-4288-8183-1294854fc085)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-84mks" podUID="3b058520-fe77-4288-8183-1294854fc085" Jan 30 14:11:27.930711 kubelet[3251]: I0130 14:11:27.930606 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:11:27.930855 containerd[1824]: time="2025-01-30T14:11:27.930707564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:11:27.931203 containerd[1824]: time="2025-01-30T14:11:27.931156383Z" level=info msg="StopPodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\"" Jan 30 14:11:27.931319 containerd[1824]: time="2025-01-30T14:11:27.931279635Z" level=info msg="Ensure that sandbox 114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9 in task-service has been cleanup successfully" Jan 30 14:11:27.931461 kubelet[3251]: I0130 14:11:27.931451 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:11:27.931715 containerd[1824]: time="2025-01-30T14:11:27.931700884Z" level=info msg="StopPodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\"" Jan 30 14:11:27.931833 containerd[1824]: time="2025-01-30T14:11:27.931820076Z" level=info msg="Ensure that sandbox c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff in task-service has been cleanup successfully" Jan 30 14:11:27.931930 kubelet[3251]: I0130 14:11:27.931921 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:11:27.932147 containerd[1824]: time="2025-01-30T14:11:27.932132819Z" level=info msg="StopPodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\"" Jan 30 14:11:27.932258 containerd[1824]: time="2025-01-30T14:11:27.932246896Z" level=info msg="Ensure that sandbox c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956 in task-service has been cleanup successfully" Jan 30 14:11:27.932433 kubelet[3251]: I0130 14:11:27.932416 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:11:27.932744 containerd[1824]: time="2025-01-30T14:11:27.932723064Z" level=info msg="StopPodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\"" Jan 30 14:11:27.932883 
containerd[1824]: time="2025-01-30T14:11:27.932866661Z" level=info msg="Ensure that sandbox 1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549 in task-service has been cleanup successfully" Jan 30 14:11:27.933419 kubelet[3251]: I0130 14:11:27.933000 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:11:27.933519 containerd[1824]: time="2025-01-30T14:11:27.933497462Z" level=info msg="StopPodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\"" Jan 30 14:11:27.933693 containerd[1824]: time="2025-01-30T14:11:27.933677757Z" level=info msg="Ensure that sandbox 800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836 in task-service has been cleanup successfully" Jan 30 14:11:27.956028 containerd[1824]: time="2025-01-30T14:11:27.955914475Z" level=error msg="StopPodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" failed" error="failed to destroy network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.956241 containerd[1824]: time="2025-01-30T14:11:27.955934418Z" level=error msg="StopPodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" failed" error="failed to destroy network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.956311 kubelet[3251]: E0130 14:11:27.956278 3251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:11:27.956383 kubelet[3251]: E0130 14:11:27.956347 3251 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549"} Jan 30 14:11:27.956410 kubelet[3251]: E0130 14:11:27.956397 3251 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a767c38c-841b-4ecb-8ffb-49c977092016\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:27.956458 kubelet[3251]: E0130 14:11:27.956412 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a767c38c-841b-4ecb-8ffb-49c977092016\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb" podUID="a767c38c-841b-4ecb-8ffb-49c977092016" Jan 30 14:11:27.956458 kubelet[3251]: E0130 14:11:27.956275 3251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:11:27.956458 kubelet[3251]: E0130 14:11:27.956434 3251 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9"} Jan 30 14:11:27.956458 kubelet[3251]: E0130 14:11:27.956451 3251 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5753cc97-d110-42c7-b5f2-97689b20b507\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:27.956553 kubelet[3251]: E0130 14:11:27.956461 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5753cc97-d110-42c7-b5f2-97689b20b507\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg" podUID="5753cc97-d110-42c7-b5f2-97689b20b507" Jan 30 14:11:27.956665 containerd[1824]: time="2025-01-30T14:11:27.956644958Z" level=error msg="StopPodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" failed" error="failed to destroy network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.956734 kubelet[3251]: E0130 14:11:27.956723 3251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:11:27.956760 kubelet[3251]: E0130 14:11:27.956736 3251 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956"} Jan 30 14:11:27.956760 kubelet[3251]: E0130 14:11:27.956747 3251 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b058520-fe77-4288-8183-1294854fc085\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:27.956760 kubelet[3251]: E0130 14:11:27.956756 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b058520-fe77-4288-8183-1294854fc085\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-84mks" podUID="3b058520-fe77-4288-8183-1294854fc085" Jan 30 14:11:27.956845 kubelet[3251]: E0130 14:11:27.956809 3251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:11:27.956845 kubelet[3251]: E0130 14:11:27.956826 3251 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff"} Jan 30 14:11:27.956883 containerd[1824]: time="2025-01-30T14:11:27.956745328Z" level=error msg="StopPodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" failed" error="failed to destroy network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.956906 kubelet[3251]: E0130 14:11:27.956846 3251 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d87a2984-6eab-4a62-8e08-b01dcb68024f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:27.956906 kubelet[3251]: E0130 14:11:27.956857 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d87a2984-6eab-4a62-8e08-b01dcb68024f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zcffb" 
podUID="d87a2984-6eab-4a62-8e08-b01dcb68024f" Jan 30 14:11:27.957855 containerd[1824]: time="2025-01-30T14:11:27.957840620Z" level=error msg="StopPodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" failed" error="failed to destroy network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:27.957941 kubelet[3251]: E0130 14:11:27.957929 3251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:11:27.957970 kubelet[3251]: E0130 14:11:27.957945 3251 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836"} Jan 30 14:11:27.957970 kubelet[3251]: E0130 14:11:27.957958 3251 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d760cfdd-6a44-4898-b2ca-8056d278d0dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:27.958015 kubelet[3251]: E0130 14:11:27.957968 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d760cfdd-6a44-4898-b2ca-8056d278d0dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt" podUID="d760cfdd-6a44-4898-b2ca-8056d278d0dd" Jan 30 14:11:28.507036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9-shm.mount: Deactivated successfully. Jan 30 14:11:28.507088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956-shm.mount: Deactivated successfully. Jan 30 14:11:28.507154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836-shm.mount: Deactivated successfully. Jan 30 14:11:28.861248 systemd[1]: Created slice kubepods-besteffort-pod7f648eb2_8d49_44e7_a889_00115811af73.slice - libcontainer container kubepods-besteffort-pod7f648eb2_8d49_44e7_a889_00115811af73.slice. 
Jan 30 14:11:28.869558 containerd[1824]: time="2025-01-30T14:11:28.869522629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwmxx,Uid:7f648eb2-8d49-44e7-a889-00115811af73,Namespace:calico-system,Attempt:0,}" Jan 30 14:11:28.902068 containerd[1824]: time="2025-01-30T14:11:28.902033489Z" level=error msg="Failed to destroy network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:28.902289 containerd[1824]: time="2025-01-30T14:11:28.902244834Z" level=error msg="encountered an error cleaning up failed sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:28.902289 containerd[1824]: time="2025-01-30T14:11:28.902285235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwmxx,Uid:7f648eb2-8d49-44e7-a889-00115811af73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:28.902458 kubelet[3251]: E0130 14:11:28.902434 3251 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:28.902694 kubelet[3251]: E0130 14:11:28.902481 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gwmxx" Jan 30 14:11:28.902694 kubelet[3251]: E0130 14:11:28.902502 3251 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gwmxx" Jan 30 14:11:28.902694 kubelet[3251]: E0130 14:11:28.902544 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gwmxx_calico-system(7f648eb2-8d49-44e7-a889-00115811af73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gwmxx_calico-system(7f648eb2-8d49-44e7-a889-00115811af73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gwmxx" podUID="7f648eb2-8d49-44e7-a889-00115811af73" Jan 30 14:11:28.903613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010-shm.mount: Deactivated successfully. Jan 30 14:11:28.937726 kubelet[3251]: I0130 14:11:28.937698 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:11:28.938279 containerd[1824]: time="2025-01-30T14:11:28.938241438Z" level=info msg="StopPodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\"" Jan 30 14:11:28.938453 containerd[1824]: time="2025-01-30T14:11:28.938432736Z" level=info msg="Ensure that sandbox 308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010 in task-service has been cleanup successfully" Jan 30 14:11:28.960793 containerd[1824]: time="2025-01-30T14:11:28.960733135Z" level=error msg="StopPodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" failed" error="failed to destroy network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:28.960897 kubelet[3251]: E0130 14:11:28.960866 3251 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:11:28.960929 kubelet[3251]: E0130 14:11:28.960896 3251 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010"} Jan 30 14:11:28.960929 kubelet[3251]: E0130 14:11:28.960921 3251 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f648eb2-8d49-44e7-a889-00115811af73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:28.960982 kubelet[3251]: E0130 14:11:28.960934 3251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f648eb2-8d49-44e7-a889-00115811af73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-gwmxx" podUID="7f648eb2-8d49-44e7-a889-00115811af73" Jan 30 14:11:31.034761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107909988.mount: Deactivated successfully. Jan 30 14:11:31.057870 containerd[1824]: time="2025-01-30T14:11:31.057819686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:31.058126 containerd[1824]: time="2025-01-30T14:11:31.058110436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 14:11:31.058542 containerd[1824]: time="2025-01-30T14:11:31.058528219Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:31.059380 containerd[1824]: time="2025-01-30T14:11:31.059361857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:31.059747 containerd[1824]: time="2025-01-30T14:11:31.059731900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.128988805s" Jan 30 14:11:31.059800 containerd[1824]: time="2025-01-30T14:11:31.059748664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 14:11:31.063306 containerd[1824]: time="2025-01-30T14:11:31.063289591Z" level=info msg="CreateContainer within sandbox \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:11:31.068546 containerd[1824]: time="2025-01-30T14:11:31.068502574Z" level=info msg="CreateContainer within sandbox \"5c4cdf5135f515bcb446cf33672a8c37c7d616efc109544f5a899a338051f607\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fe7ce17594ea822302f780edd820e3469b212872584a87fea84afc3edac2a380\"" Jan 30 14:11:31.068756 containerd[1824]: time="2025-01-30T14:11:31.068716024Z" level=info msg="StartContainer for \"fe7ce17594ea822302f780edd820e3469b212872584a87fea84afc3edac2a380\"" Jan 30 14:11:31.088306 systemd[1]: Started cri-containerd-fe7ce17594ea822302f780edd820e3469b212872584a87fea84afc3edac2a380.scope - libcontainer container fe7ce17594ea822302f780edd820e3469b212872584a87fea84afc3edac2a380. Jan 30 14:11:31.102044 containerd[1824]: time="2025-01-30T14:11:31.102018620Z" level=info msg="StartContainer for \"fe7ce17594ea822302f780edd820e3469b212872584a87fea84afc3edac2a380\" returns successfully" Jan 30 14:11:31.162172 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:11:31.162231 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 14:11:31.953413 kubelet[3251]: I0130 14:11:31.953346 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hvw2w" podStartSLOduration=2.016561018 podStartE2EDuration="11.953330367s" podCreationTimestamp="2025-01-30 14:11:20 +0000 UTC" firstStartedPulling="2025-01-30 14:11:21.123346948 +0000 UTC m=+19.322792828" lastFinishedPulling="2025-01-30 14:11:31.060116299 +0000 UTC m=+29.259562177" observedRunningTime="2025-01-30 14:11:31.952889663 +0000 UTC m=+30.152335549" watchObservedRunningTime="2025-01-30 14:11:31.953330367 +0000 UTC m=+30.152776247" Jan 30 14:11:37.945345 kubelet[3251]: I0130 14:11:37.945225 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:11:38.621158 kernel: bpftool[5145]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:11:38.766472 systemd-networkd[1609]: vxlan.calico: Link UP Jan 30 14:11:38.766478 systemd-networkd[1609]: vxlan.calico: Gained carrier Jan 30 14:11:38.846484 containerd[1824]: time="2025-01-30T14:11:38.846452134Z" level=info msg="StopPodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\"" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.868 [INFO][5227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.868 [INFO][5227] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" iface="eth0" netns="/var/run/netns/cni-273d92ce-2a55-9491-c6b0-bfaae788d4bb" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.868 [INFO][5227] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" iface="eth0" netns="/var/run/netns/cni-273d92ce-2a55-9491-c6b0-bfaae788d4bb" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.868 [INFO][5227] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" iface="eth0" netns="/var/run/netns/cni-273d92ce-2a55-9491-c6b0-bfaae788d4bb" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.868 [INFO][5227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.868 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.879 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.879 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.879 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.882 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.883 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.883 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:38.885811 containerd[1824]: 2025-01-30 14:11:38.885 [INFO][5227] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:11:38.886095 containerd[1824]: time="2025-01-30T14:11:38.885862448Z" level=info msg="TearDown network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" successfully" Jan 30 14:11:38.886095 containerd[1824]: time="2025-01-30T14:11:38.885897620Z" level=info msg="StopPodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" returns successfully" Jan 30 14:11:38.886416 containerd[1824]: time="2025-01-30T14:11:38.886371170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c89cd695-jzlrt,Uid:d760cfdd-6a44-4898-b2ca-8056d278d0dd,Namespace:calico-system,Attempt:1,}" Jan 30 14:11:38.887346 systemd[1]: run-netns-cni\x2d273d92ce\x2d2a55\x2d9491\x2dc6b0\x2dbfaae788d4bb.mount: Deactivated successfully. 
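[Editor's note] With calico-node running (StartContainer at 14:11:31) and the vxlan.calico overlay device up, the StopPodSandbox that kept failing at 14:11:27 now completes: the CNI DEL finds the veth already gone, and the IPAM plugin logs "Asked to release address but it doesn't exist. Ignoring" instead of failing, because the CNI convention is that DEL must be idempotent. kubelet then retries the pod as Attempt:1. A sketch of that idempotent-release shape (hypothetical helper, not Calico's code):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// releaseNetns removes a CNI netns file but treats "already gone" as
// success, mirroring the "doesn't exist ... Ignoring" warning above.
func releaseNetns(path string) error {
	if err := os.Remove(path); err != nil && !errors.Is(err, fs.ErrNotExist) {
		return err // real failures still abort the teardown
	}
	return nil
}

func main() {
	fmt.Println(releaseNetns("/var/run/netns/cni-273d92ce-2a55-9491-c6b0-bfaae788d4bb"))
}
```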
Jan 30 14:11:38.943178 systemd-networkd[1609]: cali5ccf70109eb: Link UP Jan 30 14:11:38.943572 systemd-networkd[1609]: cali5ccf70109eb: Gained carrier Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.907 [INFO][5256] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0 calico-kube-controllers-74c89cd695- calico-system d760cfdd-6a44-4898-b2ca-8056d278d0dd 728 0 2025-01-30 14:11:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c89cd695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-feecaa3039 calico-kube-controllers-74c89cd695-jzlrt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5ccf70109eb [] []}} ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.907 [INFO][5256] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.921 [INFO][5275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" HandleID="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.926 [INFO][5275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" HandleID="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-feecaa3039", "pod":"calico-kube-controllers-74c89cd695-jzlrt", "timestamp":"2025-01-30 14:11:38.921257746 +0000 UTC"}, Hostname:"ci-4081.3.0-a-feecaa3039", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.926 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.926 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.926 [INFO][5275] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-feecaa3039' Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.927 [INFO][5275] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.930 [INFO][5275] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.933 [INFO][5275] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.934 [INFO][5275] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.935 [INFO][5275] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.935 [INFO][5275] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.936 [INFO][5275] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.938 [INFO][5275] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.941 [INFO][5275] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.65/26] block=192.168.94.64/26 handle="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.941 [INFO][5275] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.65/26] handle="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.941 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:11:38.950179 containerd[1824]: 2025-01-30 14:11:38.941 [INFO][5275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.65/26] IPv6=[] ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" HandleID="k8s-pod-network.8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.950587 containerd[1824]: 2025-01-30 14:11:38.942 [INFO][5256] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0", GenerateName:"calico-kube-controllers-74c89cd695-", Namespace:"calico-system", SelfLink:"", UID:"d760cfdd-6a44-4898-b2ca-8056d278d0dd", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c89cd695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"", Pod:"calico-kube-controllers-74c89cd695-jzlrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccf70109eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:38.950587 containerd[1824]: 2025-01-30 14:11:38.942 [INFO][5256] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.65/32] ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.950587 containerd[1824]: 2025-01-30 14:11:38.942 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ccf70109eb ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.950587 containerd[1824]: 2025-01-30 14:11:38.943 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.950587 
containerd[1824]: 2025-01-30 14:11:38.944 [INFO][5256] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0", GenerateName:"calico-kube-controllers-74c89cd695-", Namespace:"calico-system", SelfLink:"", UID:"d760cfdd-6a44-4898-b2ca-8056d278d0dd", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c89cd695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de", Pod:"calico-kube-controllers-74c89cd695-jzlrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccf70109eb", MAC:"32:84:1e:52:59:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:38.950587 containerd[1824]: 2025-01-30 14:11:38.949 [INFO][5256] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de" Namespace="calico-system" Pod="calico-kube-controllers-74c89cd695-jzlrt" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:11:38.959464 containerd[1824]: time="2025-01-30T14:11:38.959423148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:38.959464 containerd[1824]: time="2025-01-30T14:11:38.959454140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:38.959464 containerd[1824]: time="2025-01-30T14:11:38.959461527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:38.959576 containerd[1824]: time="2025-01-30T14:11:38.959502605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:38.975299 systemd[1]: Started cri-containerd-8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de.scope - libcontainer container 8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de. 
Jan 30 14:11:38.997847 containerd[1824]: time="2025-01-30T14:11:38.997825258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c89cd695-jzlrt,Uid:d760cfdd-6a44-4898-b2ca-8056d278d0dd,Namespace:calico-system,Attempt:1,} returns sandbox id \"8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de\"" Jan 30 14:11:38.998538 containerd[1824]: time="2025-01-30T14:11:38.998527610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:11:39.846344 containerd[1824]: time="2025-01-30T14:11:39.846311411Z" level=info msg="StopPodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\"" Jan 30 14:11:39.846458 containerd[1824]: time="2025-01-30T14:11:39.846311639Z" level=info msg="StopPodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\"" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" iface="eth0" netns="/var/run/netns/cni-fc57c4ca-d03c-087b-28fe-444e994f4670" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" iface="eth0" netns="/var/run/netns/cni-fc57c4ca-d03c-087b-28fe-444e994f4670" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" iface="eth0" netns="/var/run/netns/cni-fc57c4ca-d03c-087b-28fe-444e994f4670" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.883 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.883 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.883 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.887 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.887 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.888 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:39.889372 containerd[1824]: 2025-01-30 14:11:39.888 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:11:39.889812 containerd[1824]: time="2025-01-30T14:11:39.889429441Z" level=info msg="TearDown network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" successfully" Jan 30 14:11:39.889812 containerd[1824]: time="2025-01-30T14:11:39.889457552Z" level=info msg="StopPodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" returns successfully" Jan 30 14:11:39.889935 containerd[1824]: time="2025-01-30T14:11:39.889919469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-dvdnb,Uid:a767c38c-841b-4ecb-8ffb-49c977092016,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:11:39.891409 systemd[1]: run-netns-cni\x2dfc57c4ca\x2dd03c\x2d087b\x2d28fe\x2d444e994f4670.mount: Deactivated successfully. Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.871 [INFO][5417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.871 [INFO][5417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" iface="eth0" netns="/var/run/netns/cni-f3e768fb-3fbe-4503-9f15-26e35bdb1510" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" iface="eth0" netns="/var/run/netns/cni-f3e768fb-3fbe-4503-9f15-26e35bdb1510" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" iface="eth0" netns="/var/run/netns/cni-f3e768fb-3fbe-4503-9f15-26e35bdb1510" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.872 [INFO][5417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.883 [INFO][5451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.883 [INFO][5451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.888 [INFO][5451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.891 [WARNING][5451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.891 [INFO][5451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.892 [INFO][5451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:39.894028 containerd[1824]: 2025-01-30 14:11:39.893 [INFO][5417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:11:39.894345 containerd[1824]: time="2025-01-30T14:11:39.894098761Z" level=info msg="TearDown network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" successfully" Jan 30 14:11:39.894345 containerd[1824]: time="2025-01-30T14:11:39.894119788Z" level=info msg="StopPodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" returns successfully" Jan 30 14:11:39.894505 containerd[1824]: time="2025-01-30T14:11:39.894491129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwmxx,Uid:7f648eb2-8d49-44e7-a889-00115811af73,Namespace:calico-system,Attempt:1,}" Jan 30 14:11:39.897564 systemd[1]: run-netns-cni\x2df3e768fb\x2d3fbe\x2d4503\x2d9f15\x2d26e35bdb1510.mount: Deactivated successfully. 
Jan 30 14:11:39.949363 systemd-networkd[1609]: calif2b7c8176a6: Link UP Jan 30 14:11:39.949480 systemd-networkd[1609]: calif2b7c8176a6: Gained carrier Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.914 [INFO][5480] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0 calico-apiserver-5bb9cfb479- calico-apiserver a767c38c-841b-4ecb-8ffb-49c977092016 738 0 2025-01-30 14:11:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb9cfb479 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-feecaa3039 calico-apiserver-5bb9cfb479-dvdnb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif2b7c8176a6 [] []}} ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.914 [INFO][5480] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.929 [INFO][5522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" HandleID="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.935 [INFO][5522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" HandleID="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-feecaa3039", "pod":"calico-apiserver-5bb9cfb479-dvdnb", "timestamp":"2025-01-30 14:11:39.929483283 +0000 UTC"}, Hostname:"ci-4081.3.0-a-feecaa3039", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.935 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.935 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.935 [INFO][5522] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-feecaa3039' Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.936 [INFO][5522] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.938 [INFO][5522] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.940 [INFO][5522] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.941 [INFO][5522] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.942 [INFO][5522] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.942 [INFO][5522] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.943 [INFO][5522] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593 Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.945 [INFO][5522] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.947 [INFO][5522] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.66/26] block=192.168.94.64/26 handle="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.947 [INFO][5522] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.66/26] handle="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.947 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:11:39.955584 containerd[1824]: 2025-01-30 14:11:39.947 [INFO][5522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.66/26] IPv6=[] ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" HandleID="k8s-pod-network.79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.956021 containerd[1824]: 2025-01-30 14:11:39.948 [INFO][5480] cni-plugin/k8s.go 386: Populated endpoint ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"a767c38c-841b-4ecb-8ffb-49c977092016", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"", Pod:"calico-apiserver-5bb9cfb479-dvdnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2b7c8176a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:39.956021 containerd[1824]: 2025-01-30 14:11:39.948 [INFO][5480] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.66/32] ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.956021 containerd[1824]: 2025-01-30 14:11:39.948 [INFO][5480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2b7c8176a6 ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.956021 containerd[1824]: 2025-01-30 14:11:39.949 [INFO][5480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.956021 containerd[1824]: 2025-01-30 14:11:39.949 [INFO][5480] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"a767c38c-841b-4ecb-8ffb-49c977092016", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593", Pod:"calico-apiserver-5bb9cfb479-dvdnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2b7c8176a6", MAC:"da:a9:8c:71:ec:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:39.956021 containerd[1824]: 2025-01-30 14:11:39.953 [INFO][5480] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-dvdnb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:11:39.964221 systemd-networkd[1609]: vxlan.calico: Gained IPv6LL Jan 30 14:11:39.966239 containerd[1824]: time="2025-01-30T14:11:39.966192495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:39.966239 containerd[1824]: time="2025-01-30T14:11:39.966228480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:39.966339 containerd[1824]: time="2025-01-30T14:11:39.966239927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:39.966339 containerd[1824]: time="2025-01-30T14:11:39.966290287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:39.966760 systemd-networkd[1609]: cali3bb11ef6c47: Link UP Jan 30 14:11:39.966877 systemd-networkd[1609]: cali3bb11ef6c47: Gained carrier Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.917 [INFO][5491] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0 csi-node-driver- calico-system 7f648eb2-8d49-44e7-a889-00115811af73 737 0 2025-01-30 14:11:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-feecaa3039 csi-node-driver-gwmxx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3bb11ef6c47 [] []}} ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.917 [INFO][5491] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.932 [INFO][5531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" HandleID="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.936 [INFO][5531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" HandleID="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019c8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-feecaa3039", "pod":"csi-node-driver-gwmxx", "timestamp":"2025-01-30 14:11:39.931994708 +0000 UTC"}, Hostname:"ci-4081.3.0-a-feecaa3039", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.936 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.947 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.947 [INFO][5531] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-feecaa3039' Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.948 [INFO][5531] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.951 [INFO][5531] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.955 [INFO][5531] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.956 [INFO][5531] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.957 [INFO][5531] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.957 [INFO][5531] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.958 [INFO][5531] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.962 [INFO][5531] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.964 [INFO][5531] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.67/26] block=192.168.94.64/26 handle="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.964 [INFO][5531] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.67/26] handle="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.964 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:11:39.973708 containerd[1824]: 2025-01-30 14:11:39.964 [INFO][5531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.67/26] IPv6=[] ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" HandleID="k8s-pod-network.2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.974137 containerd[1824]: 2025-01-30 14:11:39.965 [INFO][5491] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f648eb2-8d49-44e7-a889-00115811af73", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"", Pod:"csi-node-driver-gwmxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3bb11ef6c47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:39.974137 containerd[1824]: 2025-01-30 14:11:39.966 [INFO][5491] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.67/32] ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.974137 containerd[1824]: 2025-01-30 14:11:39.966 [INFO][5491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bb11ef6c47 ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.974137 containerd[1824]: 2025-01-30 14:11:39.966 [INFO][5491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.974137 containerd[1824]: 2025-01-30 14:11:39.967 [INFO][5491] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f648eb2-8d49-44e7-a889-00115811af73", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd", Pod:"csi-node-driver-gwmxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3bb11ef6c47", MAC:"ea:f3:e1:8b:0e:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:39.974137 containerd[1824]: 2025-01-30 14:11:39.972 [INFO][5491] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd" Namespace="calico-system" Pod="csi-node-driver-gwmxx" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:11:39.982939 containerd[1824]: time="2025-01-30T14:11:39.982892661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:39.982939 containerd[1824]: time="2025-01-30T14:11:39.982925665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:39.982939 containerd[1824]: time="2025-01-30T14:11:39.982932837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:39.983059 containerd[1824]: time="2025-01-30T14:11:39.982975161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:39.985259 systemd[1]: Started cri-containerd-79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593.scope - libcontainer container 79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593. Jan 30 14:11:39.988935 systemd[1]: Started cri-containerd-2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd.scope - libcontainer container 2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd. 
Jan 30 14:11:39.998988 containerd[1824]: time="2025-01-30T14:11:39.998965685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwmxx,Uid:7f648eb2-8d49-44e7-a889-00115811af73,Namespace:calico-system,Attempt:1,} returns sandbox id \"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd\"" Jan 30 14:11:40.007390 containerd[1824]: time="2025-01-30T14:11:40.007367277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-dvdnb,Uid:a767c38c-841b-4ecb-8ffb-49c977092016,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593\"" Jan 30 14:11:40.219247 systemd-networkd[1609]: cali5ccf70109eb: Gained IPv6LL Jan 30 14:11:40.606280 containerd[1824]: time="2025-01-30T14:11:40.606255365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:40.606494 containerd[1824]: time="2025-01-30T14:11:40.606472431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 14:11:40.606820 containerd[1824]: time="2025-01-30T14:11:40.606809049Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:40.607731 containerd[1824]: time="2025-01-30T14:11:40.607719581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:40.608204 containerd[1824]: time="2025-01-30T14:11:40.608191010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.609647693s" Jan 30 14:11:40.608226 containerd[1824]: time="2025-01-30T14:11:40.608208224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 14:11:40.608745 containerd[1824]: time="2025-01-30T14:11:40.608736873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:11:40.612327 containerd[1824]: time="2025-01-30T14:11:40.612313572Z" level=info msg="CreateContainer within sandbox \"8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:11:40.616665 containerd[1824]: time="2025-01-30T14:11:40.616652414Z" level=info msg="CreateContainer within sandbox \"8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2f21340e75f96d8ca551d1a8a19cc37de4ec24822ecc9220639db4394ec50f5d\"" Jan 30 14:11:40.616943 containerd[1824]: time="2025-01-30T14:11:40.616933734Z" level=info msg="StartContainer for \"2f21340e75f96d8ca551d1a8a19cc37de4ec24822ecc9220639db4394ec50f5d\"" Jan 30 14:11:40.641408 systemd[1]: Started cri-containerd-2f21340e75f96d8ca551d1a8a19cc37de4ec24822ecc9220639db4394ec50f5d.scope - libcontainer 
container 2f21340e75f96d8ca551d1a8a19cc37de4ec24822ecc9220639db4394ec50f5d. Jan 30 14:11:40.665555 containerd[1824]: time="2025-01-30T14:11:40.665500779Z" level=info msg="StartContainer for \"2f21340e75f96d8ca551d1a8a19cc37de4ec24822ecc9220639db4394ec50f5d\" returns successfully" Jan 30 14:11:40.846898 containerd[1824]: time="2025-01-30T14:11:40.846781673Z" level=info msg="StopPodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\"" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.910 [INFO][5743] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.910 [INFO][5743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" iface="eth0" netns="/var/run/netns/cni-e5389368-9f6d-7bb7-11a6-6412cdb73518" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.910 [INFO][5743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" iface="eth0" netns="/var/run/netns/cni-e5389368-9f6d-7bb7-11a6-6412cdb73518" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.911 [INFO][5743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" iface="eth0" netns="/var/run/netns/cni-e5389368-9f6d-7bb7-11a6-6412cdb73518" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.911 [INFO][5743] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.911 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.925 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.925 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.925 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.929 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.929 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.930 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:40.931472 containerd[1824]: 2025-01-30 14:11:40.930 [INFO][5743] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:11:40.931980 containerd[1824]: time="2025-01-30T14:11:40.931515079Z" level=info msg="TearDown network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" successfully" Jan 30 14:11:40.931980 containerd[1824]: time="2025-01-30T14:11:40.931543414Z" level=info msg="StopPodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" returns successfully" Jan 30 14:11:40.932047 containerd[1824]: time="2025-01-30T14:11:40.932030349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zcffb,Uid:d87a2984-6eab-4a62-8e08-b01dcb68024f,Namespace:kube-system,Attempt:1,}" Jan 30 14:11:40.933265 systemd[1]: run-netns-cni\x2de5389368\x2d9f6d\x2d7bb7\x2d11a6\x2d6412cdb73518.mount: Deactivated successfully. Jan 30 14:11:40.970217 kubelet[3251]: I0130 14:11:40.970175 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74c89cd695-jzlrt" podStartSLOduration=19.359894893 podStartE2EDuration="20.970162014s" podCreationTimestamp="2025-01-30 14:11:20 +0000 UTC" firstStartedPulling="2025-01-30 14:11:38.998417822 +0000 UTC m=+37.197863700" lastFinishedPulling="2025-01-30 14:11:40.608684943 +0000 UTC m=+38.808130821" observedRunningTime="2025-01-30 14:11:40.969694077 +0000 UTC m=+39.169139955" watchObservedRunningTime="2025-01-30 14:11:40.970162014 +0000 UTC m=+39.169607889" Jan 30 14:11:40.987712 systemd-networkd[1609]: calib4e46fb8ca2: Link UP Jan 30 14:11:40.987838 systemd-networkd[1609]: calib4e46fb8ca2: Gained carrier Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.952 [INFO][5777] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0 coredns-7db6d8ff4d- kube-system d87a2984-6eab-4a62-8e08-b01dcb68024f 756 0 2025-01-30 14:11:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-feecaa3039 coredns-7db6d8ff4d-zcffb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4e46fb8ca2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 
14:11:40.952 [INFO][5777] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.967 [INFO][5796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" HandleID="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.972 [INFO][5796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" HandleID="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028bdb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-feecaa3039", "pod":"coredns-7db6d8ff4d-zcffb", "timestamp":"2025-01-30 14:11:40.96731863 +0000 UTC"}, Hostname:"ci-4081.3.0-a-feecaa3039", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.972 [INFO][5796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.972 [INFO][5796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.972 [INFO][5796] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-feecaa3039' Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.973 [INFO][5796] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.975 [INFO][5796] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.977 [INFO][5796] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.978 [INFO][5796] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.979 [INFO][5796] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.979 [INFO][5796] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.980 [INFO][5796] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0 Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.982 [INFO][5796] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.985 [INFO][5796] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.68/26] block=192.168.94.64/26 handle="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.985 [INFO][5796] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.68/26] handle="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.986 [INFO][5796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:11:40.993983 containerd[1824]: 2025-01-30 14:11:40.986 [INFO][5796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.68/26] IPv6=[] ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" HandleID="k8s-pod-network.edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.994437 containerd[1824]: 2025-01-30 14:11:40.986 [INFO][5777] cni-plugin/k8s.go 386: Populated endpoint ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d87a2984-6eab-4a62-8e08-b01dcb68024f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"", Pod:"coredns-7db6d8ff4d-zcffb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e46fb8ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:40.994437 containerd[1824]: 2025-01-30 14:11:40.986 [INFO][5777] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.68/32] ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.994437 containerd[1824]: 2025-01-30 14:11:40.987 [INFO][5777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4e46fb8ca2 ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.994437 containerd[1824]: 2025-01-30 14:11:40.987 [INFO][5777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" 
WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:40.994437 containerd[1824]: 2025-01-30 14:11:40.987 [INFO][5777] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d87a2984-6eab-4a62-8e08-b01dcb68024f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0", Pod:"coredns-7db6d8ff4d-zcffb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e46fb8ca2", MAC:"2e:2f:7b:2d:19:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:40.994589 containerd[1824]: 2025-01-30 14:11:40.993 [INFO][5777] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zcffb" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:11:41.004066 containerd[1824]: time="2025-01-30T14:11:41.004028108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:41.004066 containerd[1824]: time="2025-01-30T14:11:41.004057925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:41.004066 containerd[1824]: time="2025-01-30T14:11:41.004065262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:41.004189 containerd[1824]: time="2025-01-30T14:11:41.004111114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:41.030314 systemd[1]: Started cri-containerd-edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0.scope - libcontainer container edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0. Jan 30 14:11:41.054699 containerd[1824]: time="2025-01-30T14:11:41.054673613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zcffb,Uid:d87a2984-6eab-4a62-8e08-b01dcb68024f,Namespace:kube-system,Attempt:1,} returns sandbox id \"edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0\"" Jan 30 14:11:41.056147 containerd[1824]: time="2025-01-30T14:11:41.056129320Z" level=info msg="CreateContainer within sandbox \"edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:11:41.060446 containerd[1824]: time="2025-01-30T14:11:41.060404667Z" level=info msg="CreateContainer within sandbox \"edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71c54e5a1810532279e47e6db80fb69cee5654dba8d51bdde263e158b32206c3\"" Jan 30 14:11:41.060602 containerd[1824]: time="2025-01-30T14:11:41.060586418Z" level=info msg="StartContainer for \"71c54e5a1810532279e47e6db80fb69cee5654dba8d51bdde263e158b32206c3\"" Jan 30 14:11:41.087386 systemd[1]: Started cri-containerd-71c54e5a1810532279e47e6db80fb69cee5654dba8d51bdde263e158b32206c3.scope - libcontainer container 71c54e5a1810532279e47e6db80fb69cee5654dba8d51bdde263e158b32206c3. Jan 30 14:11:41.101612 containerd[1824]: time="2025-01-30T14:11:41.101555638Z" level=info msg="StartContainer for \"71c54e5a1810532279e47e6db80fb69cee5654dba8d51bdde263e158b32206c3\" returns successfully" Jan 30 14:11:41.627387 systemd-networkd[1609]: calif2b7c8176a6: Gained IPv6LL Jan 30 14:11:41.819461 systemd-networkd[1609]: cali3bb11ef6c47: Gained IPv6LL Jan 30 14:11:41.878311 containerd[1824]: time="2025-01-30T14:11:41.878212085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:41.878388 containerd[1824]: time="2025-01-30T14:11:41.878365414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 14:11:41.878934 containerd[1824]: time="2025-01-30T14:11:41.878892274Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:41.879957 containerd[1824]: time="2025-01-30T14:11:41.879917606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:41.880408 containerd[1824]: time="2025-01-30T14:11:41.880372421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.271606629s" Jan 30 14:11:41.880408 containerd[1824]: time="2025-01-30T14:11:41.880386860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 14:11:41.881067 containerd[1824]: time="2025-01-30T14:11:41.881057214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:11:41.881816 containerd[1824]: time="2025-01-30T14:11:41.881803276Z" level=info msg="CreateContainer within sandbox \"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:11:41.887386 containerd[1824]: time="2025-01-30T14:11:41.887372725Z" level=info msg="CreateContainer within sandbox \"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7b5595f8502f1664626d9f923d7ad19d3eea9368580e7de307d0e652a3ccb7f1\"" Jan 30 14:11:41.887710 containerd[1824]: time="2025-01-30T14:11:41.887664396Z" level=info msg="StartContainer for \"7b5595f8502f1664626d9f923d7ad19d3eea9368580e7de307d0e652a3ccb7f1\"" Jan 30 14:11:41.916408 systemd[1]: Started cri-containerd-7b5595f8502f1664626d9f923d7ad19d3eea9368580e7de307d0e652a3ccb7f1.scope - libcontainer container 7b5595f8502f1664626d9f923d7ad19d3eea9368580e7de307d0e652a3ccb7f1. Jan 30 14:11:41.931332 containerd[1824]: time="2025-01-30T14:11:41.931304357Z" level=info msg="StartContainer for \"7b5595f8502f1664626d9f923d7ad19d3eea9368580e7de307d0e652a3ccb7f1\" returns successfully" Jan 30 14:11:41.969609 kubelet[3251]: I0130 14:11:41.969581 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:11:41.978791 kubelet[3251]: I0130 14:11:41.978721 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zcffb" podStartSLOduration=26.978697079 podStartE2EDuration="26.978697079s" podCreationTimestamp="2025-01-30 14:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:41.978226313 +0000 UTC m=+40.177672217" watchObservedRunningTime="2025-01-30 14:11:41.978697079 +0000 UTC m=+40.178142975" Jan 30 14:11:42.847827 containerd[1824]: time="2025-01-30T14:11:42.847733003Z" level=info msg="StopPodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\"" Jan 30 14:11:42.847827 containerd[1824]: time="2025-01-30T14:11:42.847805951Z" level=info msg="StopPodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\"" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.889 [INFO][5992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.889 [INFO][5992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" iface="eth0" netns="/var/run/netns/cni-c50d9480-ca05-8957-6a01-58ff85d829af" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" iface="eth0" netns="/var/run/netns/cni-c50d9480-ca05-8957-6a01-58ff85d829af" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" iface="eth0" netns="/var/run/netns/cni-c50d9480-ca05-8957-6a01-58ff85d829af" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.900 [INFO][6023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.900 [INFO][6023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.900 [INFO][6023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.904 [WARNING][6023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.904 [INFO][6023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.905 [INFO][6023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:42.907145 containerd[1824]: 2025-01-30 14:11:42.906 [INFO][5992] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:11:42.907687 containerd[1824]: time="2025-01-30T14:11:42.907248567Z" level=info msg="TearDown network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" successfully" Jan 30 14:11:42.907687 containerd[1824]: time="2025-01-30T14:11:42.907276309Z" level=info msg="StopPodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" returns successfully" Jan 30 14:11:42.907213 systemd-networkd[1609]: calib4e46fb8ca2: Gained IPv6LL Jan 30 14:11:42.907917 containerd[1824]: time="2025-01-30T14:11:42.907890952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84mks,Uid:3b058520-fe77-4288-8183-1294854fc085,Namespace:kube-system,Attempt:1,}" Jan 30 14:11:42.909355 systemd[1]: run-netns-cni\x2dc50d9480\x2dca05\x2d8957\x2d6a01\x2d58ff85d829af.mount: Deactivated successfully. 
Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" iface="eth0" netns="/var/run/netns/cni-3b263fab-462d-5759-61e5-c323aa82233d" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" iface="eth0" netns="/var/run/netns/cni-3b263fab-462d-5759-61e5-c323aa82233d" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" iface="eth0" netns="/var/run/netns/cni-3b263fab-462d-5759-61e5-c323aa82233d" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.890 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.900 [INFO][6024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.900 [INFO][6024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.905 [INFO][6024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.909 [WARNING][6024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.909 [INFO][6024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.910 [INFO][6024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:42.911390 containerd[1824]: 2025-01-30 14:11:42.910 [INFO][5993] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:11:42.911635 containerd[1824]: time="2025-01-30T14:11:42.911469644Z" level=info msg="TearDown network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" successfully" Jan 30 14:11:42.911635 containerd[1824]: time="2025-01-30T14:11:42.911481621Z" level=info msg="StopPodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" returns successfully" Jan 30 14:11:42.911852 containerd[1824]: time="2025-01-30T14:11:42.911838719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-2xzmg,Uid:5753cc97-d110-42c7-b5f2-97689b20b507,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:11:42.915234 systemd[1]: run-netns-cni\x2d3b263fab\x2d462d\x2d5759\x2d61e5\x2dc323aa82233d.mount: Deactivated successfully. Jan 30 14:11:42.969797 systemd-networkd[1609]: calia1fdf8d8e33: Link UP Jan 30 14:11:42.969926 systemd-networkd[1609]: calia1fdf8d8e33: Gained carrier Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.932 [INFO][6056] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0 coredns-7db6d8ff4d- kube-system 3b058520-fe77-4288-8183-1294854fc085 777 0 2025-01-30 14:11:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-feecaa3039 coredns-7db6d8ff4d-84mks eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia1fdf8d8e33 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.933 [INFO][6056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.949 [INFO][6102] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" HandleID="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.954 [INFO][6102] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" HandleID="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019dd30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-feecaa3039", "pod":"coredns-7db6d8ff4d-84mks", "timestamp":"2025-01-30 14:11:42.949287925 +0000 UTC"}, Hostname:"ci-4081.3.0-a-feecaa3039", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.954 [INFO][6102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.954 [INFO][6102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.954 [INFO][6102] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-feecaa3039' Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.955 [INFO][6102] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.957 [INFO][6102] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.959 [INFO][6102] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.960 [INFO][6102] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.961 [INFO][6102] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.961 [INFO][6102] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.962 [INFO][6102] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1 Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.964 [INFO][6102] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6102] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.69/26] block=192.168.94.64/26 handle="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6102] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.69/26] handle="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
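
The exchange above is the standard Calico assignment path: look up the node's block affinity, load the block 192.168.94.64/26, then claim the next free ordinal, which lands on 192.168.94.69 directly after the .68 handed to coredns-7db6d8ff4d-zcffb a moment earlier. A toy version of that ordinal walk using Go's net/netip (the real allocator also manages handles, affinity confirmation, and retry on datastore conflict):

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block whose ordinal is not yet
// used. A /26 holds 64 ordinals (0..63); Calico tracks allocations
// per-block in roughly this shape.
func nextFree(block netip.Prefix, used map[int]bool) (netip.Addr, bool) {
	size := 1 << (32 - block.Bits()) // 64 for a /26
	addr := block.Addr()
	for ord := 0; ord < size; ord++ {
		if !used[ord] {
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.94.64/26")
	// Assume ordinals 0-4 (.64-.68) are taken, as the earlier
	// assignments in this journal suggest.
	used := map[int]bool{0: true, 1: true, 2: true, 3: true, 4: true}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.94.69, matching the claim above
	}
}
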
Jan 30 14:11:42.976386 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6102] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.69/26] IPv6=[] ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" HandleID="k8s-pod-network.fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.976802 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b058520-fe77-4288-8183-1294854fc085", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"", Pod:"coredns-7db6d8ff4d-84mks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1fdf8d8e33", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:42.976802 containerd[1824]: 2025-01-30 14:11:42.969 [INFO][6056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.69/32] ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.976802 containerd[1824]: 2025-01-30 14:11:42.969 [INFO][6056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1fdf8d8e33 ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.976802 containerd[1824]: 2025-01-30 14:11:42.969 [INFO][6056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" 
WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.976802 containerd[1824]: 2025-01-30 14:11:42.970 [INFO][6056] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b058520-fe77-4288-8183-1294854fc085", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1", Pod:"coredns-7db6d8ff4d-84mks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1fdf8d8e33", MAC:"3e:89:20:24:2a:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:42.976959 containerd[1824]: 2025-01-30 14:11:42.975 [INFO][6056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84mks" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:11:42.987413 systemd-networkd[1609]: calid95b6b67f91: Link UP Jan 30 14:11:42.987535 systemd-networkd[1609]: calid95b6b67f91: Gained carrier Jan 30 14:11:42.989078 containerd[1824]: time="2025-01-30T14:11:42.989035769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:42.989078 containerd[1824]: time="2025-01-30T14:11:42.989063555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:42.989078 containerd[1824]: time="2025-01-30T14:11:42.989070756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:42.989248 containerd[1824]: time="2025-01-30T14:11:42.989132861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.938 [INFO][6072] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0 calico-apiserver-5bb9cfb479- calico-apiserver 5753cc97-d110-42c7-b5f2-97689b20b507 778 0 2025-01-30 14:11:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb9cfb479 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-feecaa3039 calico-apiserver-5bb9cfb479-2xzmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid95b6b67f91 [] []}} ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.938 [INFO][6072] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.952 [INFO][6107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" HandleID="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.956 [INFO][6107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" HandleID="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029bb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-feecaa3039", "pod":"calico-apiserver-5bb9cfb479-2xzmg", "timestamp":"2025-01-30 14:11:42.952399468 +0000 UTC"}, Hostname:"ci-4081.3.0-a-feecaa3039", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.956 [INFO][6107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
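
Note the handoff visible in the timestamps: handler [6107] logs "About to acquire host-wide IPAM lock" at .956 but only acquires it at .968, the moment handler [6102] releases it after claiming .69. The lock serializes concurrent CNI ADDs on the node (coredns-7db6d8ff4d-84mks and calico-apiserver-5bb9cfb479-2xzmg are being set up together here) so two sandboxes can never claim the same ordinal. A toy in-process analogue, not Calico's actual lock:

package main

import (
	"fmt"
	"sync"
)

// blockAlloc hands out ordinals from one /26; the mutex plays the role
// of the host-wide IPAM lock seen in the log.
type blockAlloc struct {
	mu   sync.Mutex
	next int
}

func (b *blockAlloc) assign(pod string) int {
	b.mu.Lock()
	defer b.mu.Unlock()
	ord := b.next
	b.next++
	fmt.Printf("%s -> ordinal %d\n", pod, ord)
	return ord
}

func main() {
	b := &blockAlloc{next: 5} // .64+5 = .69 is the next free address
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-7db6d8ff4d-84mks", "calico-apiserver-5bb9cfb479-2xzmg"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); b.assign(p) }(pod)
	}
	wg.Wait() // which pod wins the lock first is nondeterministic,
	// but each gets a distinct ordinal, as in the journal.
}
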
Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.968 [INFO][6107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-feecaa3039' Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.969 [INFO][6107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.971 [INFO][6107] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.974 [INFO][6107] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.976 [INFO][6107] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.978 [INFO][6107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.978 [INFO][6107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.979 [INFO][6107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1 Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.982 [INFO][6107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.985 [INFO][6107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.70/26] block=192.168.94.64/26 handle="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.985 [INFO][6107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.70/26] handle="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" host="ci-4081.3.0-a-feecaa3039" Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.985 [INFO][6107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
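
"Writing block in order to claim IPs" is the commit point: the block was read at some datastore revision, mutated locally, and is now written back conditionally, so a concurrent writer elsewhere would force a re-read and retry. A loose sketch of that optimistic pattern (hypothetical in-memory store; Calico performs the conditional write through its datastore client):

package main

import (
	"errors"
	"fmt"
)

// block is a stand-in for the IPAM block resource: a revision plus a
// map of claimed ordinals to handles.
type block struct {
	rev  int
	used map[int]string
}

var store = &block{rev: 1, used: map[int]string{}}

// casWrite applies mutate only if the block is still at the revision
// the caller read, mimicking a datastore compare-and-swap.
func casWrite(readRev int, mutate func(*block)) error {
	if store.rev != readRev {
		return errors.New("update conflict: block changed, re-read and retry")
	}
	mutate(store)
	store.rev++
	return nil
}

func claim(ord int, handle string) {
	for {
		rev := store.rev // "Attempting to load block"
		if casWrite(rev, func(b *block) { b.used[ord] = handle }) == nil {
			fmt.Printf("claimed ordinal %d for %s at rev %d\n", ord, handle, store.rev)
			return
		}
		// conflict: another writer won; loop with fresh state
	}
}

func main() {
	claim(6, "k8s-pod-network.3c46e7fa") // .70 for apiserver-2xzmg
}
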
Jan 30 14:11:42.993808 containerd[1824]: 2025-01-30 14:11:42.985 [INFO][6107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.70/26] IPv6=[] ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" HandleID="k8s-pod-network.3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.994299 containerd[1824]: 2025-01-30 14:11:42.986 [INFO][6072] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"5753cc97-d110-42c7-b5f2-97689b20b507", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"", Pod:"calico-apiserver-5bb9cfb479-2xzmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid95b6b67f91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:42.994299 containerd[1824]: 2025-01-30 14:11:42.986 [INFO][6072] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.70/32] ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.994299 containerd[1824]: 2025-01-30 14:11:42.986 [INFO][6072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid95b6b67f91 ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.994299 containerd[1824]: 2025-01-30 14:11:42.987 [INFO][6072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:42.994299 containerd[1824]: 2025-01-30 14:11:42.987 [INFO][6072] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"5753cc97-d110-42c7-b5f2-97689b20b507", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1", Pod:"calico-apiserver-5bb9cfb479-2xzmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid95b6b67f91", MAC:"3a:75:3d:82:97:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:42.994299 containerd[1824]: 2025-01-30 14:11:42.992 [INFO][6072] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1" Namespace="calico-apiserver" Pod="calico-apiserver-5bb9cfb479-2xzmg" WorkloadEndpoint="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:11:43.003340 containerd[1824]: time="2025-01-30T14:11:43.003289400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:43.003340 containerd[1824]: time="2025-01-30T14:11:43.003328296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:43.003462 containerd[1824]: time="2025-01-30T14:11:43.003343565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:43.003462 containerd[1824]: time="2025-01-30T14:11:43.003419835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:43.014260 systemd[1]: Started cri-containerd-fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1.scope - libcontainer container fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1. Jan 30 14:11:43.015853 systemd[1]: Started cri-containerd-3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1.scope - libcontainer container 3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1. 
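
Each burst of "loading plugin" lines comes from a fresh io.containerd.runc.v2 shim starting for a sandbox, and the daemon's lines are key=value pairs with Go-quoted strings. A small parser for exactly that shape, standard library only (a convenience sketch for grepping such journals, not containerd's own format code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLogfmt splits a containerd log line ('key=value' pairs, values
// optionally in Go-style quotes) into a map. Good enough for the
// shim-plugin lines above; not a general logfmt parser.
func parseLogfmt(line string) map[string]string {
	out := map[string]string{}
	for len(line) > 0 {
		line = strings.TrimLeft(line, " ")
		eq := strings.IndexByte(line, '=')
		if eq < 0 {
			break
		}
		key, rest := line[:eq], line[eq+1:]
		var val string
		if strings.HasPrefix(rest, `"`) {
			// quoted value: let strconv handle \" and \\ escapes
			q, err := strconv.QuotedPrefix(rest)
			if err != nil {
				break
			}
			val, _ = strconv.Unquote(q)
			rest = rest[len(q):]
		} else if sp := strings.IndexByte(rest, ' '); sp >= 0 {
			val, rest = rest[:sp], rest[sp:]
		} else {
			val, rest = rest, ""
		}
		out[key] = val
		line = rest
	}
	return out
}

func main() {
	l := `time="2025-01-30T14:11:43.003419835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1`
	m := parseLogfmt(l)
	fmt.Println(m["level"], "|", m["msg"], "|", m["runtime"])
}
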
Jan 30 14:11:43.037992 containerd[1824]: time="2025-01-30T14:11:43.037967718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84mks,Uid:3b058520-fe77-4288-8183-1294854fc085,Namespace:kube-system,Attempt:1,} returns sandbox id \"fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1\"" Jan 30 14:11:43.038611 containerd[1824]: time="2025-01-30T14:11:43.038598545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb9cfb479-2xzmg,Uid:5753cc97-d110-42c7-b5f2-97689b20b507,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1\"" Jan 30 14:11:43.039397 containerd[1824]: time="2025-01-30T14:11:43.039382453Z" level=info msg="CreateContainer within sandbox \"fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:11:43.043969 containerd[1824]: time="2025-01-30T14:11:43.043929666Z" level=info msg="CreateContainer within sandbox \"fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59dd8e84e338a144524bfd3ce77ac3c2fc6bca50b9262cd6811220ed04dd9ac4\"" Jan 30 14:11:43.044205 containerd[1824]: time="2025-01-30T14:11:43.044171561Z" level=info msg="StartContainer for \"59dd8e84e338a144524bfd3ce77ac3c2fc6bca50b9262cd6811220ed04dd9ac4\"" Jan 30 14:11:43.071191 systemd[1]: Started cri-containerd-59dd8e84e338a144524bfd3ce77ac3c2fc6bca50b9262cd6811220ed04dd9ac4.scope - libcontainer container 59dd8e84e338a144524bfd3ce77ac3c2fc6bca50b9262cd6811220ed04dd9ac4. Jan 30 14:11:43.083564 containerd[1824]: time="2025-01-30T14:11:43.083537690Z" level=info msg="StartContainer for \"59dd8e84e338a144524bfd3ce77ac3c2fc6bca50b9262cd6811220ed04dd9ac4\" returns successfully" Jan 30 14:11:43.697399 containerd[1824]: time="2025-01-30T14:11:43.697342764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:43.697568 containerd[1824]: time="2025-01-30T14:11:43.697543062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 14:11:43.697863 containerd[1824]: time="2025-01-30T14:11:43.697820946Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:43.699153 containerd[1824]: time="2025-01-30T14:11:43.699128293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:43.699498 containerd[1824]: time="2025-01-30T14:11:43.699458501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.818385398s" Jan 30 14:11:43.699498 containerd[1824]: time="2025-01-30T14:11:43.699473592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 14:11:43.699962 
containerd[1824]: time="2025-01-30T14:11:43.699925365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:11:43.700523 containerd[1824]: time="2025-01-30T14:11:43.700511627Z" level=info msg="CreateContainer within sandbox \"79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:11:43.704927 containerd[1824]: time="2025-01-30T14:11:43.704886815Z" level=info msg="CreateContainer within sandbox \"79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"58e7a6c550a9d9f0bf7ad63a5497d4c91e357683ef8e93149b10f2b46e6cbd1a\"" Jan 30 14:11:43.705221 containerd[1824]: time="2025-01-30T14:11:43.705165080Z" level=info msg="StartContainer for \"58e7a6c550a9d9f0bf7ad63a5497d4c91e357683ef8e93149b10f2b46e6cbd1a\"" Jan 30 14:11:43.727437 systemd[1]: Started cri-containerd-58e7a6c550a9d9f0bf7ad63a5497d4c91e357683ef8e93149b10f2b46e6cbd1a.scope - libcontainer container 58e7a6c550a9d9f0bf7ad63a5497d4c91e357683ef8e93149b10f2b46e6cbd1a. Jan 30 14:11:43.752246 containerd[1824]: time="2025-01-30T14:11:43.752192613Z" level=info msg="StartContainer for \"58e7a6c550a9d9f0bf7ad63a5497d4c91e357683ef8e93149b10f2b46e6cbd1a\" returns successfully" Jan 30 14:11:43.997780 kubelet[3251]: I0130 14:11:43.997691 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bb9cfb479-dvdnb" podStartSLOduration=20.305687759 podStartE2EDuration="23.997669079s" podCreationTimestamp="2025-01-30 14:11:20 +0000 UTC" firstStartedPulling="2025-01-30 14:11:40.007892794 +0000 UTC m=+38.207338669" lastFinishedPulling="2025-01-30 14:11:43.699874111 +0000 UTC m=+41.899319989" observedRunningTime="2025-01-30 14:11:43.997427079 +0000 UTC m=+42.196872960" watchObservedRunningTime="2025-01-30 14:11:43.997669079 +0000 UTC m=+42.197114962" Jan 30 14:11:44.003244 kubelet[3251]: I0130 14:11:44.003157 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-84mks" podStartSLOduration=29.00312594 podStartE2EDuration="29.00312594s" podCreationTimestamp="2025-01-30 14:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:44.002998977 +0000 UTC m=+42.202444866" watchObservedRunningTime="2025-01-30 14:11:44.00312594 +0000 UTC m=+42.202571815" Jan 30 14:11:44.507348 systemd-networkd[1609]: calia1fdf8d8e33: Gained IPv6LL Jan 30 14:11:44.891411 systemd-networkd[1609]: calid95b6b67f91: Gained IPv6LL Jan 30 14:11:44.975828 kubelet[3251]: I0130 14:11:44.975732 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:11:45.036090 containerd[1824]: time="2025-01-30T14:11:45.036063816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:45.036318 containerd[1824]: time="2025-01-30T14:11:45.036296877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 14:11:45.036660 containerd[1824]: time="2025-01-30T14:11:45.036646872Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 
14:11:45.037585 containerd[1824]: time="2025-01-30T14:11:45.037575076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:45.037996 containerd[1824]: time="2025-01-30T14:11:45.037982610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.338045075s" Jan 30 14:11:45.038020 containerd[1824]: time="2025-01-30T14:11:45.037999801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 14:11:45.038539 containerd[1824]: time="2025-01-30T14:11:45.038502194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:11:45.039210 containerd[1824]: time="2025-01-30T14:11:45.039169708Z" level=info msg="CreateContainer within sandbox \"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:11:45.043818 containerd[1824]: time="2025-01-30T14:11:45.043773649Z" level=info msg="CreateContainer within sandbox \"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03c65d28ff94bb8603eac032650b0a0e36afa5c02cbeb0336f1158f72a8b247c\"" Jan 30 14:11:45.044013 containerd[1824]: time="2025-01-30T14:11:45.044001518Z" level=info msg="StartContainer for \"03c65d28ff94bb8603eac032650b0a0e36afa5c02cbeb0336f1158f72a8b247c\"" Jan 30 14:11:45.076222 systemd[1]: Started cri-containerd-03c65d28ff94bb8603eac032650b0a0e36afa5c02cbeb0336f1158f72a8b247c.scope - libcontainer container 03c65d28ff94bb8603eac032650b0a0e36afa5c02cbeb0336f1158f72a8b247c. 
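
The pod_startup_latency_tracker entries a few lines up report two figures: podStartE2EDuration, which is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration, which additionally subtracts the window spent pulling images. Replaying the calico-apiserver-5bb9cfb479-dvdnb entry (the last few nanoseconds differ because kubelet works from monotonic readings):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// layout matches Go's default time.Time.String() as printed in the log
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-01-30 14:11:20 +0000 UTC")
	observed := parse("2025-01-30 14:11:43.997669079 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-01-30 14:11:40.007892794 +0000 UTC")
	pullEnd := parse("2025-01-30 14:11:43.699874111 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("E2E:", e2e) // 23.997669079s, as logged
	fmt.Println("SLO:", slo) // 20.305687762s; the journal says 20.305687759s
}
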
Jan 30 14:11:45.092277 containerd[1824]: time="2025-01-30T14:11:45.092219181Z" level=info msg="StartContainer for \"03c65d28ff94bb8603eac032650b0a0e36afa5c02cbeb0336f1158f72a8b247c\" returns successfully" Jan 30 14:11:45.413316 containerd[1824]: time="2025-01-30T14:11:45.413292610Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:45.413515 containerd[1824]: time="2025-01-30T14:11:45.413497888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:11:45.414693 containerd[1824]: time="2025-01-30T14:11:45.414651142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 376.133304ms" Jan 30 14:11:45.414693 containerd[1824]: time="2025-01-30T14:11:45.414670667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 14:11:45.416016 containerd[1824]: time="2025-01-30T14:11:45.415967172Z" level=info msg="CreateContainer within sandbox \"3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:11:45.421674 containerd[1824]: time="2025-01-30T14:11:45.421630778Z" level=info msg="CreateContainer within sandbox \"3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"db43dfca71f2e012ebc27d494686417b2d32146058012624d123a6072d67b6e7\"" Jan 30 14:11:45.422010 containerd[1824]: time="2025-01-30T14:11:45.421999696Z" level=info msg="StartContainer for \"db43dfca71f2e012ebc27d494686417b2d32146058012624d123a6072d67b6e7\"" Jan 30 14:11:45.442337 systemd[1]: Started cri-containerd-db43dfca71f2e012ebc27d494686417b2d32146058012624d123a6072d67b6e7.scope - libcontainer container db43dfca71f2e012ebc27d494686417b2d32146058012624d123a6072d67b6e7. 
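
The "in 376.133304ms" here, against 1.818385398s for the same ghcr.io/flatcar/calico/apiserver:v3.29.1 image moments earlier, is containerd's content store at work: the event is ImageUpdate rather than ImageCreate, and "bytes read=77" shows only the registry resolve touched the network while every layer was served locally. A back-of-envelope comparison:

package main

import (
	"fmt"
	"time"
)

func main() {
	// rate converts bytes-over-duration to MiB/s.
	rate := func(bytes int64, d time.Duration) float64 {
		return float64(bytes) / d.Seconds() / (1 << 20)
	}
	first := 1818385398 * time.Nanosecond  // cold pull, 42,001,404 bytes
	second := 376133304 * time.Nanosecond  // warm pull, 77 bytes read
	fmt.Printf("first pull:  %.1f MiB/s over the wire\n", rate(42001404, first))
	fmt.Printf("second pull: 77 bytes in %v (content-store hit)\n", second)
}
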
Jan 30 14:11:45.475667 containerd[1824]: time="2025-01-30T14:11:45.475615473Z" level=info msg="StartContainer for \"db43dfca71f2e012ebc27d494686417b2d32146058012624d123a6072d67b6e7\" returns successfully" Jan 30 14:11:45.892339 kubelet[3251]: I0130 14:11:45.892281 3251 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:11:45.892339 kubelet[3251]: I0130 14:11:45.892354 3251 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:11:46.000242 kubelet[3251]: I0130 14:11:46.000154 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bb9cfb479-2xzmg" podStartSLOduration=23.624144446 podStartE2EDuration="26.000091223s" podCreationTimestamp="2025-01-30 14:11:20 +0000 UTC" firstStartedPulling="2025-01-30 14:11:43.039084365 +0000 UTC m=+41.238530243" lastFinishedPulling="2025-01-30 14:11:45.415031135 +0000 UTC m=+43.614477020" observedRunningTime="2025-01-30 14:11:45.999757233 +0000 UTC m=+44.199203207" watchObservedRunningTime="2025-01-30 14:11:46.000091223 +0000 UTC m=+44.199537201" Jan 30 14:11:46.015560 kubelet[3251]: I0130 14:11:46.015489 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gwmxx" podStartSLOduration=20.976617571 podStartE2EDuration="26.015466175s" podCreationTimestamp="2025-01-30 14:11:20 +0000 UTC" firstStartedPulling="2025-01-30 14:11:39.999604302 +0000 UTC m=+38.199050180" lastFinishedPulling="2025-01-30 14:11:45.038452906 +0000 UTC m=+43.237898784" observedRunningTime="2025-01-30 14:11:46.015198311 +0000 UTC m=+44.214644250" watchObservedRunningTime="2025-01-30 14:11:46.015466175 +0000 UTC m=+44.214912086" Jan 30 14:11:46.990665 kubelet[3251]: I0130 14:11:46.990561 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:11:52.559373 kubelet[3251]: I0130 14:11:52.559264 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:12:01.843571 containerd[1824]: time="2025-01-30T14:12:01.843373049Z" level=info msg="StopPodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\"" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.899 [WARNING][6568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f648eb2-8d49-44e7-a889-00115811af73", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd", Pod:"csi-node-driver-gwmxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3bb11ef6c47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.899 [INFO][6568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.899 [INFO][6568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" iface="eth0" netns="" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.899 [INFO][6568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.899 [INFO][6568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.914 [INFO][6583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.914 [INFO][6583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.914 [INFO][6583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.919 [WARNING][6583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.919 [INFO][6583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.920 [INFO][6583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:01.922341 containerd[1824]: 2025-01-30 14:12:01.921 [INFO][6568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.922786 containerd[1824]: time="2025-01-30T14:12:01.922378074Z" level=info msg="TearDown network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" successfully" Jan 30 14:12:01.922786 containerd[1824]: time="2025-01-30T14:12:01.922398773Z" level=info msg="StopPodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" returns successfully" Jan 30 14:12:01.922836 containerd[1824]: time="2025-01-30T14:12:01.922820547Z" level=info msg="RemovePodSandbox for \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\"" Jan 30 14:12:01.922861 containerd[1824]: time="2025-01-30T14:12:01.922840236Z" level=info msg="Forcibly stopping sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\"" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.944 [WARNING][6615] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f648eb2-8d49-44e7-a889-00115811af73", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"2949a378e3965c086b3e362f3569a8b7d8ddfddcb57a4247cc277af510cb63dd", Pod:"csi-node-driver-gwmxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3bb11ef6c47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.944 [INFO][6615] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.944 [INFO][6615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" iface="eth0" netns="" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.944 [INFO][6615] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.944 [INFO][6615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.957 [INFO][6631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.957 [INFO][6631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.957 [INFO][6631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.961 [WARNING][6631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.961 [INFO][6631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" HandleID="k8s-pod-network.308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Workload="ci--4081.3.0--a--feecaa3039-k8s-csi--node--driver--gwmxx-eth0" Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.962 [INFO][6631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:01.963811 containerd[1824]: 2025-01-30 14:12:01.963 [INFO][6615] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010" Jan 30 14:12:01.963811 containerd[1824]: time="2025-01-30T14:12:01.963809615Z" level=info msg="TearDown network for sandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" successfully" Jan 30 14:12:01.965314 containerd[1824]: time="2025-01-30T14:12:01.965273518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:12:01.965314 containerd[1824]: time="2025-01-30T14:12:01.965305205Z" level=info msg="RemovePodSandbox \"308d861143a4508fbf9d60d1449b5f882812e6dbbc75770b1219737e59919010\" returns successfully" Jan 30 14:12:01.965615 containerd[1824]: time="2025-01-30T14:12:01.965585441Z" level=info msg="StopPodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\"" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.983 [WARNING][6658] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0", GenerateName:"calico-kube-controllers-74c89cd695-", Namespace:"calico-system", SelfLink:"", UID:"d760cfdd-6a44-4898-b2ca-8056d278d0dd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c89cd695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de", Pod:"calico-kube-controllers-74c89cd695-jzlrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccf70109eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.983 [INFO][6658] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.983 [INFO][6658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" iface="eth0" netns="" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.983 [INFO][6658] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.983 [INFO][6658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.993 [INFO][6670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.993 [INFO][6670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.993 [INFO][6670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.996 [WARNING][6670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.997 [INFO][6670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.997 [INFO][6670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:01.999066 containerd[1824]: 2025-01-30 14:12:01.998 [INFO][6658] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:01.999378 containerd[1824]: time="2025-01-30T14:12:01.999088839Z" level=info msg="TearDown network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" successfully" Jan 30 14:12:01.999378 containerd[1824]: time="2025-01-30T14:12:01.999108930Z" level=info msg="StopPodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" returns successfully" Jan 30 14:12:01.999378 containerd[1824]: time="2025-01-30T14:12:01.999360628Z" level=info msg="RemovePodSandbox for \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\"" Jan 30 14:12:01.999378 containerd[1824]: time="2025-01-30T14:12:01.999375551Z" level=info msg="Forcibly stopping sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\"" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.016 [WARNING][6699] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0", GenerateName:"calico-kube-controllers-74c89cd695-", Namespace:"calico-system", SelfLink:"", UID:"d760cfdd-6a44-4898-b2ca-8056d278d0dd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c89cd695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"8504e6c831404d4c989f75e9ef7dd0d00d39a2dd0123062da46b2d65ffaae4de", Pod:"calico-kube-controllers-74c89cd695-jzlrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccf70109eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.017 [INFO][6699] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.017 [INFO][6699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" iface="eth0" netns="" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.017 [INFO][6699] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.017 [INFO][6699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.026 [INFO][6713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.026 [INFO][6713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.027 [INFO][6713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.030 [WARNING][6713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.030 [INFO][6713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" HandleID="k8s-pod-network.800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--kube--controllers--74c89cd695--jzlrt-eth0" Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.031 [INFO][6713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.032489 containerd[1824]: 2025-01-30 14:12:02.031 [INFO][6699] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836" Jan 30 14:12:02.032787 containerd[1824]: time="2025-01-30T14:12:02.032506864Z" level=info msg="TearDown network for sandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" successfully" Jan 30 14:12:02.033935 containerd[1824]: time="2025-01-30T14:12:02.033923044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:12:02.033957 containerd[1824]: time="2025-01-30T14:12:02.033946488Z" level=info msg="RemovePodSandbox \"800b9ae5166c46b3c79f24f47fd0a4123a3fa7cd86a0f2638ddda171b6eeb836\" returns successfully" Jan 30 14:12:02.034084 containerd[1824]: time="2025-01-30T14:12:02.034072805Z" level=info msg="StopPodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\"" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.051 [WARNING][6741] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"a767c38c-841b-4ecb-8ffb-49c977092016", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593", Pod:"calico-apiserver-5bb9cfb479-dvdnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2b7c8176a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.051 [INFO][6741] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.051 [INFO][6741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" iface="eth0" netns="" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.051 [INFO][6741] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.051 [INFO][6741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.061 [INFO][6755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.061 [INFO][6755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.061 [INFO][6755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.064 [WARNING][6755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.064 [INFO][6755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.065 [INFO][6755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.066797 containerd[1824]: 2025-01-30 14:12:02.066 [INFO][6741] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.066797 containerd[1824]: time="2025-01-30T14:12:02.066791790Z" level=info msg="TearDown network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" successfully" Jan 30 14:12:02.067153 containerd[1824]: time="2025-01-30T14:12:02.066808303Z" level=info msg="StopPodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" returns successfully" Jan 30 14:12:02.067153 containerd[1824]: time="2025-01-30T14:12:02.067072321Z" level=info msg="RemovePodSandbox for \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\"" Jan 30 14:12:02.067153 containerd[1824]: time="2025-01-30T14:12:02.067086514Z" level=info msg="Forcibly stopping sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\"" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.084 [WARNING][6781] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"a767c38c-841b-4ecb-8ffb-49c977092016", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"79def804a2020deb49985849ebb8508f956c908422cb1ee7488db6f4f8165593", Pod:"calico-apiserver-5bb9cfb479-dvdnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2b7c8176a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.085 [INFO][6781] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.085 [INFO][6781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" iface="eth0" netns="" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.085 [INFO][6781] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.085 [INFO][6781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.097 [INFO][6795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.097 [INFO][6795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.097 [INFO][6795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.100 [WARNING][6795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.100 [INFO][6795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" HandleID="k8s-pod-network.1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--dvdnb-eth0" Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.101 [INFO][6795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.102968 containerd[1824]: 2025-01-30 14:12:02.102 [INFO][6781] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549" Jan 30 14:12:02.102968 containerd[1824]: time="2025-01-30T14:12:02.102914439Z" level=info msg="TearDown network for sandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" successfully" Jan 30 14:12:02.104264 containerd[1824]: time="2025-01-30T14:12:02.104221315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:12:02.104264 containerd[1824]: time="2025-01-30T14:12:02.104248811Z" level=info msg="RemovePodSandbox \"1c97e7985be92bb4ee9a4bf0b8a5f5b30b26136ba3a4dec8ec1bbbb2e0bd2549\" returns successfully" Jan 30 14:12:02.104548 containerd[1824]: time="2025-01-30T14:12:02.104507424Z" level=info msg="StopPodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\"" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.121 [WARNING][6822] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d87a2984-6eab-4a62-8e08-b01dcb68024f", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0", Pod:"coredns-7db6d8ff4d-zcffb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e46fb8ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.122 [INFO][6822] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.122 [INFO][6822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" iface="eth0" netns="" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.122 [INFO][6822] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.122 [INFO][6822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.132 [INFO][6836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.132 [INFO][6836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.132 [INFO][6836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.135 [WARNING][6836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.135 [INFO][6836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.136 [INFO][6836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.138138 containerd[1824]: 2025-01-30 14:12:02.137 [INFO][6822] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.138521 containerd[1824]: time="2025-01-30T14:12:02.138158526Z" level=info msg="TearDown network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" successfully" Jan 30 14:12:02.138521 containerd[1824]: time="2025-01-30T14:12:02.138177188Z" level=info msg="StopPodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" returns successfully" Jan 30 14:12:02.138521 containerd[1824]: time="2025-01-30T14:12:02.138489217Z" level=info msg="RemovePodSandbox for \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\"" Jan 30 14:12:02.138521 containerd[1824]: time="2025-01-30T14:12:02.138513982Z" level=info msg="Forcibly stopping sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\"" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.158 [WARNING][6863] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d87a2984-6eab-4a62-8e08-b01dcb68024f", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"edd9fb950370a9bfb8609639dd6ff0bcd0337a5435bb8c8f664432a8592b8de0", Pod:"coredns-7db6d8ff4d-zcffb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e46fb8ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.158 [INFO][6863] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.158 [INFO][6863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" iface="eth0" netns="" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.158 [INFO][6863] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.158 [INFO][6863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.170 [INFO][6878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.170 [INFO][6878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.170 [INFO][6878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.173 [WARNING][6878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.173 [INFO][6878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" HandleID="k8s-pod-network.c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--zcffb-eth0" Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.174 [INFO][6878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.176224 containerd[1824]: 2025-01-30 14:12:02.175 [INFO][6863] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff" Jan 30 14:12:02.176224 containerd[1824]: time="2025-01-30T14:12:02.176221615Z" level=info msg="TearDown network for sandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" successfully" Jan 30 14:12:02.177580 containerd[1824]: time="2025-01-30T14:12:02.177536884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:12:02.177580 containerd[1824]: time="2025-01-30T14:12:02.177576665Z" level=info msg="RemovePodSandbox \"c90c2322cdd4e4ea521c69480d46dc893e9307cf74ccacab38b714665032d4ff\" returns successfully" Jan 30 14:12:02.177889 containerd[1824]: time="2025-01-30T14:12:02.177854070Z" level=info msg="StopPodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\"" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.195 [WARNING][6907] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b058520-fe77-4288-8183-1294854fc085", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1", Pod:"coredns-7db6d8ff4d-84mks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1fdf8d8e33", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.196 [INFO][6907] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.196 [INFO][6907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" iface="eth0" netns="" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.196 [INFO][6907] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.196 [INFO][6907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.206 [INFO][6918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.206 [INFO][6918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.206 [INFO][6918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.210 [WARNING][6918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.210 [INFO][6918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.211 [INFO][6918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.212495 containerd[1824]: 2025-01-30 14:12:02.211 [INFO][6907] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.212812 containerd[1824]: time="2025-01-30T14:12:02.212495670Z" level=info msg="TearDown network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" successfully" Jan 30 14:12:02.212812 containerd[1824]: time="2025-01-30T14:12:02.212512486Z" level=info msg="StopPodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" returns successfully" Jan 30 14:12:02.212812 containerd[1824]: time="2025-01-30T14:12:02.212791367Z" level=info msg="RemovePodSandbox for \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\"" Jan 30 14:12:02.212812 containerd[1824]: time="2025-01-30T14:12:02.212807923Z" level=info msg="Forcibly stopping sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\"" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.235 [WARNING][6947] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b058520-fe77-4288-8183-1294854fc085", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"fb936e7f9b1000ca6492bdb6f5fb80fc68bdbc40c89c83fd44c0c22fe80a1ea1", Pod:"coredns-7db6d8ff4d-84mks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1fdf8d8e33", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.235 [INFO][6947] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.235 [INFO][6947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" iface="eth0" netns="" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.235 [INFO][6947] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.235 [INFO][6947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.247 [INFO][6960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.247 [INFO][6960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.247 [INFO][6960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.251 [WARNING][6960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.251 [INFO][6960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" HandleID="k8s-pod-network.c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Workload="ci--4081.3.0--a--feecaa3039-k8s-coredns--7db6d8ff4d--84mks-eth0" Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.253 [INFO][6960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.254723 containerd[1824]: 2025-01-30 14:12:02.253 [INFO][6947] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956" Jan 30 14:12:02.255140 containerd[1824]: time="2025-01-30T14:12:02.254756642Z" level=info msg="TearDown network for sandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" successfully" Jan 30 14:12:02.257270 containerd[1824]: time="2025-01-30T14:12:02.257255690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:12:02.257311 containerd[1824]: time="2025-01-30T14:12:02.257285790Z" level=info msg="RemovePodSandbox \"c01f5ce4c3d0547add680e36c36d566da1dc738b18895e4499dda54306998956\" returns successfully" Jan 30 14:12:02.257601 containerd[1824]: time="2025-01-30T14:12:02.257591833Z" level=info msg="StopPodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\"" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.281 [WARNING][6989] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"5753cc97-d110-42c7-b5f2-97689b20b507", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1", Pod:"calico-apiserver-5bb9cfb479-2xzmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid95b6b67f91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.281 [INFO][6989] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.281 [INFO][6989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" iface="eth0" netns="" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.281 [INFO][6989] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.281 [INFO][6989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.326 [INFO][7002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.326 [INFO][7002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.326 [INFO][7002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.336 [WARNING][7002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.336 [INFO][7002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.339 [INFO][7002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.343239 containerd[1824]: 2025-01-30 14:12:02.341 [INFO][6989] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.344246 containerd[1824]: time="2025-01-30T14:12:02.343307977Z" level=info msg="TearDown network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" successfully" Jan 30 14:12:02.344246 containerd[1824]: time="2025-01-30T14:12:02.343357870Z" level=info msg="StopPodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" returns successfully" Jan 30 14:12:02.344246 containerd[1824]: time="2025-01-30T14:12:02.344061326Z" level=info msg="RemovePodSandbox for \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\"" Jan 30 14:12:02.344246 containerd[1824]: time="2025-01-30T14:12:02.344129961Z" level=info msg="Forcibly stopping sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\"" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.372 [WARNING][7034] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0", GenerateName:"calico-apiserver-5bb9cfb479-", Namespace:"calico-apiserver", SelfLink:"", UID:"5753cc97-d110-42c7-b5f2-97689b20b507", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb9cfb479", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-feecaa3039", ContainerID:"3c46e7fa370c616d2d6dcf7fdf49dea5109f8626480e8100de53e253b939b3d1", Pod:"calico-apiserver-5bb9cfb479-2xzmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid95b6b67f91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.372 [INFO][7034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.372 [INFO][7034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" iface="eth0" netns="" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.372 [INFO][7034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.372 [INFO][7034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.383 [INFO][7048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.383 [INFO][7048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.383 [INFO][7048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.386 [WARNING][7048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.386 [INFO][7048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" HandleID="k8s-pod-network.114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Workload="ci--4081.3.0--a--feecaa3039-k8s-calico--apiserver--5bb9cfb479--2xzmg-eth0" Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.387 [INFO][7048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:12:02.389286 containerd[1824]: 2025-01-30 14:12:02.388 [INFO][7034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9" Jan 30 14:12:02.389286 containerd[1824]: time="2025-01-30T14:12:02.389239915Z" level=info msg="TearDown network for sandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" successfully" Jan 30 14:12:02.390551 containerd[1824]: time="2025-01-30T14:12:02.390506970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:12:02.390551 containerd[1824]: time="2025-01-30T14:12:02.390536784Z" level=info msg="RemovePodSandbox \"114337981e323b43eafd8e0fa8c88519910b9fc36052d5866b58bbf7ad5be5f9\" returns successfully" Jan 30 14:12:22.821729 kubelet[3251]: I0130 14:12:22.821608 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:16.726073 systemd[1]: Started sshd@10-139.178.70.199:22-218.92.0.155:31478.service - OpenSSH per-connection server daemon (218.92.0.155:31478). Jan 30 14:13:17.818614 sshd[7228]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 14:13:19.327488 sshd[7226]: PAM: Permission denied for root from 218.92.0.155 Jan 30 14:13:19.625448 sshd[7229]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 14:13:22.408751 sshd[7226]: PAM: Permission denied for root from 218.92.0.155 Jan 30 14:13:22.705101 sshd[7230]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 14:13:24.565373 sshd[7226]: PAM: Permission denied for root from 218.92.0.155 Jan 30 14:13:24.716755 sshd[7226]: Received disconnect from 218.92.0.155 port 31478:11: [preauth] Jan 30 14:13:24.716755 sshd[7226]: Disconnected from authenticating user root 218.92.0.155 port 31478 [preauth] Jan 30 14:13:24.721476 systemd[1]: sshd@10-139.178.70.199:22-218.92.0.155:31478.service: Deactivated successfully. 
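Before the SSH noise starts, the Calico teardown above is worth a note: the forcible RemovePodSandbox releases the sandbox's IP a second time, finds nothing under the handle ID ("Asked to release address but it doesn't exist"), ignores that, falls back to the workload ID, and still reports success, all inside the host-wide IPAM lock. A minimal Go sketch of that idempotent release pattern follows; the types, field names, and IDs are invented for illustration and this is not Calico's actual implementation.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// errNotFound stands in for the datastore's "address not found" condition.
var errNotFound = errors.New("address not found")

// ipamStore is a stand-in for the datastore behind the host-wide IPAM lock.
type ipamStore struct {
	mu         sync.Mutex        // the "host-wide IPAM lock"
	byHandle   map[string]string // handle ID -> address
	byWorkload map[string]string // workload ID -> address
}

// releaseAddress mirrors the logged order of operations: take the lock,
// try the handle ID first, and when the address is already gone, ignore
// that and retry by workload ID so a repeated teardown stays idempotent.
func (s *ipamStore) releaseAddress(handleID, workloadID string) error {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if _, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		return nil
	}
	if _, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
		return nil
	}
	return errNotFound
}

func main() {
	s := &ipamStore{
		byHandle:   map[string]string{},
		byWorkload: map[string]string{},
	}
	// Second teardown of an already-released sandbox: both lookups miss,
	// which the caller treats as success, as in the log above.
	if err := s.releaseAddress("k8s-pod-network.1143...", "calico-apiserver-5bb9cfb479-2xzmg-eth0"); err != nil {
		fmt.Println("nothing to release:", err)
	}
}

Treating a missing address as success is what lets the forced RemovePodSandbox return cleanly even though the first teardown had already released 192.168.94.70/32.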
Jan 30 14:13:36.633456 update_engine[1811]: I20250130 14:13:36.633224 1811 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 30 14:13:36.633456 update_engine[1811]: I20250130 14:13:36.633322 1811 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 30 14:13:36.634477 update_engine[1811]: I20250130 14:13:36.633688 1811 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 30 14:13:36.634822 update_engine[1811]: I20250130 14:13:36.634725 1811 omaha_request_params.cc:62] Current group set to lts
Jan 30 14:13:36.635017 update_engine[1811]: I20250130 14:13:36.634969 1811 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 30 14:13:36.635017 update_engine[1811]: I20250130 14:13:36.635010 1811 update_attempter.cc:643] Scheduling an action processor start.
Jan 30 14:13:36.635251 update_engine[1811]: I20250130 14:13:36.635047 1811 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 14:13:36.635251 update_engine[1811]: I20250130 14:13:36.635146 1811 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 30 14:13:36.635446 update_engine[1811]: I20250130 14:13:36.635303 1811 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 14:13:36.635446 update_engine[1811]: I20250130 14:13:36.635332 1811 omaha_request_action.cc:272] Request:
Jan 30 14:13:36.635446 update_engine[1811]: [request XML not captured in this excerpt]
Jan 30 14:13:36.635446 update_engine[1811]: I20250130 14:13:36.635349 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:13:36.636352 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 30 14:13:36.638609 update_engine[1811]: I20250130 14:13:36.638571 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:13:36.638774 update_engine[1811]: I20250130 14:13:36.638734 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:13:36.639533 update_engine[1811]: E20250130 14:13:36.639490 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:13:36.639533 update_engine[1811]: I20250130 14:13:36.639523 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 30 14:13:46.542689 update_engine[1811]: I20250130 14:13:46.542521 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:13:46.543739 update_engine[1811]: I20250130 14:13:46.543064 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:13:46.543739 update_engine[1811]: I20250130 14:13:46.543623 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:13:46.544490 update_engine[1811]: E20250130 14:13:46.544375 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:13:46.544676 update_engine[1811]: I20250130 14:13:46.544519 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 30 14:13:56.542784 update_engine[1811]: I20250130 14:13:56.542616 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:13:56.543824 update_engine[1811]: I20250130 14:13:56.543206 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:13:56.543824 update_engine[1811]: I20250130 14:13:56.543748 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:13:56.544445 update_engine[1811]: E20250130 14:13:56.544338 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:13:56.544632 update_engine[1811]: I20250130 14:13:56.544463 1811 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 30 14:14:06.542760 update_engine[1811]: I20250130 14:14:06.542596 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:14:06.543745 update_engine[1811]: I20250130 14:14:06.543238 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:14:06.543864 update_engine[1811]: I20250130 14:14:06.543754 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:14:06.544910 update_engine[1811]: E20250130 14:14:06.544792 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:14:06.545149 update_engine[1811]: I20250130 14:14:06.544929 1811 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 14:14:06.545149 update_engine[1811]: I20250130 14:14:06.544957 1811 omaha_request_action.cc:617] Omaha request response:
Jan 30 14:14:06.545371 update_engine[1811]: E20250130 14:14:06.545140 1811 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 30 14:14:06.545371 update_engine[1811]: I20250130 14:14:06.545192 1811 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 30 14:14:06.545371 update_engine[1811]: I20250130 14:14:06.545210 1811 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 14:14:06.545371 update_engine[1811]: I20250130 14:14:06.545225 1811 update_attempter.cc:306] Processing Done.
Jan 30 14:14:06.545371 update_engine[1811]: E20250130 14:14:06.545256 1811 update_attempter.cc:619] Update failed.
Jan 30 14:14:06.545371 update_engine[1811]: I20250130 14:14:06.545272 1811 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 30 14:14:06.545371 update_engine[1811]: I20250130 14:14:06.545287 1811 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 30 14:14:06.545371 update_engine[1811]: I20250130 14:14:06.545303 1811 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 30 14:14:06.546053 update_engine[1811]: I20250130 14:14:06.545456 1811 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 14:14:06.546053 update_engine[1811]: I20250130 14:14:06.545516 1811 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 14:14:06.546053 update_engine[1811]: I20250130 14:14:06.545535 1811 omaha_request_action.cc:272] Request:
Jan 30 14:14:06.546053 update_engine[1811]: [request XML not captured in this excerpt]
Jan 30 14:14:06.546053 update_engine[1811]: I20250130 14:14:06.545552 1811 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:14:06.546053 update_engine[1811]: I20250130 14:14:06.545985 1811 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:14:06.547036 update_engine[1811]: I20250130 14:14:06.546497 1811 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:14:06.547158 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 30 14:14:06.547797 update_engine[1811]: E20250130 14:14:06.547185 1811 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547314 1811 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547343 1811 omaha_request_action.cc:617] Omaha request response:
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547361 1811 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547377 1811 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547391 1811 update_attempter.cc:306] Processing Done.
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547408 1811 update_attempter.cc:310] Error event sent.
Jan 30 14:14:06.547797 update_engine[1811]: I20250130 14:14:06.547432 1811 update_check_scheduler.cc:74] Next update check in 46m10s
Jan 30 14:14:06.548509 locksmithd[1859]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 30 14:15:21.763378 systemd[1]: Started sshd@11-139.178.70.199:22-218.92.0.155:62013.service - OpenSSH per-connection server daemon (218.92.0.155:62013).
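The update_engine excerpt shows the whole failure loop: the Omaha request is posted to the host "disabled" (on Flatcar, setting SERVER=disabled in the update configuration is the usual way to switch updates off, which would make the resolver failure expected behavior rather than a network fault), libcurl retries three times at roughly ten-second intervals, the transport error becomes error code 37 (kActionCodeOmahaErrorInHTTPResponse), an error event is posted through the same dead endpoint, and the next check is scheduled for 46m10s later. A small Go sketch of that fixed-interval retry-then-reschedule shape follows; update_engine itself is C++, the URL, retry count, and intervals below are read off the log, and everything else is invented.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Values read off the log above; the rest of this file is illustrative.
const (
	maxRetries    = 3                              // "No HTTP response, retry 1..3"
	retryInterval = 10 * time.Second               // retries land ~10 s apart
	nextCheck     = 46*time.Minute + 10*time.Second // "Next update check in 46m10s"
)

// checkForUpdate makes one attempt plus maxRetries retries, like the
// libcurl fetcher: each pass "starts/resumes" the transfer, and a DNS
// failure on the literal host "disabled" fails it immediately.
func checkForUpdate(url string) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		resp, e := http.Get(url)
		if e == nil {
			resp.Body.Close()
			return nil
		}
		err = e
		if attempt < maxRetries {
			fmt.Printf("No HTTP response, retry %d\n", attempt+1)
			time.Sleep(retryInterval)
		}
	}
	return err // reported as "Omaha request network transfer failed."
}

func main() {
	if err := checkForUpdate("https://disabled/update"); err != nil {
		fmt.Println("update failed:", err)
		fmt.Println("next update check in", nextCheck)
	}
}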
Jan 30 14:15:22.906844 sshd[7521]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Jan 30 14:15:24.240314 sshd[7500]: PAM: Permission denied for root from 218.92.0.155
Jan 30 14:15:24.547051 sshd[7522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Jan 30 14:15:26.155429 sshd[7500]: PAM: Permission denied for root from 218.92.0.155
Jan 30 14:15:26.459601 sshd[7548]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Jan 30 14:15:28.676214 sshd[7500]: PAM: Permission denied for root from 218.92.0.155
Jan 30 14:15:28.827014 sshd[7500]: Received disconnect from 218.92.0.155 port 62013:11: [preauth]
Jan 30 14:15:28.827014 sshd[7500]: Disconnected from authenticating user root 218.92.0.155 port 62013 [preauth]
Jan 30 14:15:28.830752 systemd[1]: sshd@11-139.178.70.199:22-218.92.0.155:62013.service: Deactivated successfully.
Jan 30 14:16:50.360817 systemd[1]: Started sshd@12-139.178.70.199:22-147.75.109.163:37192.service - OpenSSH per-connection server daemon (147.75.109.163:37192).
Jan 30 14:16:50.411768 sshd[7718]: Accepted publickey for core from 147.75.109.163 port 37192 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:16:50.413321 sshd[7718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:16:50.418623 systemd-logind[1806]: New session 12 of user core.
Jan 30 14:16:50.436525 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 14:16:50.573819 sshd[7718]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:50.575524 systemd[1]: sshd@12-139.178.70.199:22-147.75.109.163:37192.service: Deactivated successfully.
Jan 30 14:16:50.576465 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 14:16:50.577050 systemd-logind[1806]: Session 12 logged out. Waiting for processes to exit.
Jan 30 14:16:50.577647 systemd-logind[1806]: Removed session 12.
Jan 30 14:16:55.607604 systemd[1]: Started sshd@13-139.178.70.199:22-147.75.109.163:37208.service - OpenSSH per-connection server daemon (147.75.109.163:37208).
Jan 30 14:16:55.670718 sshd[7792]: Accepted publickey for core from 147.75.109.163 port 37208 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:16:55.672310 sshd[7792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:16:55.677579 systemd-logind[1806]: New session 13 of user core.
Jan 30 14:16:55.701506 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 14:16:55.850314 sshd[7792]: pam_unix(sshd:session): session closed for user core
Jan 30 14:16:55.852237 systemd[1]: sshd@13-139.178.70.199:22-147.75.109.163:37208.service: Deactivated successfully.
Jan 30 14:16:55.853431 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 14:16:55.854371 systemd-logind[1806]: Session 13 logged out. Waiting for processes to exit.
Jan 30 14:16:55.855087 systemd-logind[1806]: Removed session 13.
Jan 30 14:17:00.874012 systemd[1]: Started sshd@14-139.178.70.199:22-147.75.109.163:43216.service - OpenSSH per-connection server daemon (147.75.109.163:43216).
Jan 30 14:17:00.906860 sshd[7827]: Accepted publickey for core from 147.75.109.163 port 43216 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:00.907606 sshd[7827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:00.910459 systemd-logind[1806]: New session 14 of user core.
Jan 30 14:17:00.919360 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 14:17:01.005031 sshd[7827]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:01.028077 systemd[1]: sshd@14-139.178.70.199:22-147.75.109.163:43216.service: Deactivated successfully.
Jan 30 14:17:01.029091 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 14:17:01.029991 systemd-logind[1806]: Session 14 logged out. Waiting for processes to exit.
Jan 30 14:17:01.030836 systemd[1]: Started sshd@15-139.178.70.199:22-147.75.109.163:43224.service - OpenSSH per-connection server daemon (147.75.109.163:43224).
Jan 30 14:17:01.031388 systemd-logind[1806]: Removed session 14.
Jan 30 14:17:01.061562 sshd[7854]: Accepted publickey for core from 147.75.109.163 port 43224 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:01.062451 sshd[7854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:01.065954 systemd-logind[1806]: New session 15 of user core.
Jan 30 14:17:01.083372 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 14:17:01.198335 sshd[7854]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:01.212467 systemd[1]: sshd@15-139.178.70.199:22-147.75.109.163:43224.service: Deactivated successfully.
Jan 30 14:17:01.213633 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 14:17:01.214547 systemd-logind[1806]: Session 15 logged out. Waiting for processes to exit.
Jan 30 14:17:01.215666 systemd[1]: Started sshd@16-139.178.70.199:22-147.75.109.163:43236.service - OpenSSH per-connection server daemon (147.75.109.163:43236).
Jan 30 14:17:01.216305 systemd-logind[1806]: Removed session 15.
Jan 30 14:17:01.250019 sshd[7878]: Accepted publickey for core from 147.75.109.163 port 43236 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:01.251064 sshd[7878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:01.254871 systemd-logind[1806]: New session 16 of user core.
Jan 30 14:17:01.269665 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 14:17:01.401795 sshd[7878]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:01.403686 systemd[1]: sshd@16-139.178.70.199:22-147.75.109.163:43236.service: Deactivated successfully.
Jan 30 14:17:01.404558 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 14:17:01.404964 systemd-logind[1806]: Session 16 logged out. Waiting for processes to exit.
Jan 30 14:17:01.405569 systemd-logind[1806]: Removed session 16.
Jan 30 14:17:06.434285 systemd[1]: Started sshd@17-139.178.70.199:22-147.75.109.163:43252.service - OpenSSH per-connection server daemon (147.75.109.163:43252).
Jan 30 14:17:06.462281 sshd[7916]: Accepted publickey for core from 147.75.109.163 port 43252 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:06.463079 sshd[7916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:06.466013 systemd-logind[1806]: New session 17 of user core.
Jan 30 14:17:06.474290 systemd[1]: Started session-17.scope - Session 17 of User core.
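Each inbound connection in this log is handled by its own socket-activated unit, and the unit name encodes an instance counter plus the local and remote endpoints, for example sshd@15-139.178.70.199:22-147.75.109.163:43224.service, which is why every session leaves a matching Started/Deactivated pair. A hedged Go sketch that recovers the endpoints from such a name follows; the naming scheme is inferred from this log and the regexp is illustrative only.

package main

import (
	"fmt"
	"regexp"
)

// unitRE matches per-connection unit names of the shape seen above:
// sshd@<instance>-<localIP>:<localPort>-<remoteIP>:<remotePort>.service
var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	name := "sshd@15-139.178.70.199:22-147.75.109.163:43224.service"
	m := unitRE.FindStringSubmatch(name)
	if m == nil {
		fmt.Println("not a per-connection sshd unit")
		return
	}
	// Prints: instance=15 local=139.178.70.199:22 remote=147.75.109.163:43224
	fmt.Printf("instance=%s local=%s:%s remote=%s:%s\n", m[1], m[2], m[3], m[4], m[5])
}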
Jan 30 14:17:06.558813 sshd[7916]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:06.576784 systemd[1]: sshd@17-139.178.70.199:22-147.75.109.163:43252.service: Deactivated successfully.
Jan 30 14:17:06.577587 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 14:17:06.578308 systemd-logind[1806]: Session 17 logged out. Waiting for processes to exit.
Jan 30 14:17:06.579036 systemd[1]: Started sshd@18-139.178.70.199:22-147.75.109.163:43256.service - OpenSSH per-connection server daemon (147.75.109.163:43256).
Jan 30 14:17:06.579498 systemd-logind[1806]: Removed session 17.
Jan 30 14:17:06.610380 sshd[7941]: Accepted publickey for core from 147.75.109.163 port 43256 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:06.611626 sshd[7941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:06.616202 systemd-logind[1806]: New session 18 of user core.
Jan 30 14:17:06.630495 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:17:06.733940 sshd[7941]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:06.754276 systemd[1]: sshd@18-139.178.70.199:22-147.75.109.163:43256.service: Deactivated successfully.
Jan 30 14:17:06.755331 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:17:06.756345 systemd-logind[1806]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:17:06.757503 systemd[1]: Started sshd@19-139.178.70.199:22-147.75.109.163:43266.service - OpenSSH per-connection server daemon (147.75.109.163:43266).
Jan 30 14:17:06.758230 systemd-logind[1806]: Removed session 18.
Jan 30 14:17:06.794381 sshd[7964]: Accepted publickey for core from 147.75.109.163 port 43266 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:06.795657 sshd[7964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:06.800668 systemd-logind[1806]: New session 19 of user core.
Jan 30 14:17:06.814476 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:17:07.906172 sshd[7964]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:07.915823 systemd[1]: sshd@19-139.178.70.199:22-147.75.109.163:43266.service: Deactivated successfully.
Jan 30 14:17:07.916662 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:17:07.917371 systemd-logind[1806]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:17:07.918151 systemd[1]: Started sshd@20-139.178.70.199:22-147.75.109.163:46858.service - OpenSSH per-connection server daemon (147.75.109.163:46858).
Jan 30 14:17:07.918650 systemd-logind[1806]: Removed session 19.
Jan 30 14:17:07.947628 sshd[7994]: Accepted publickey for core from 147.75.109.163 port 46858 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:07.948453 sshd[7994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:07.951518 systemd-logind[1806]: New session 20 of user core.
Jan 30 14:17:07.970298 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:17:08.151227 sshd[7994]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:08.170078 systemd[1]: sshd@20-139.178.70.199:22-147.75.109.163:46858.service: Deactivated successfully.
Jan 30 14:17:08.171190 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:17:08.172040 systemd-logind[1806]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:17:08.172839 systemd[1]: Started sshd@21-139.178.70.199:22-147.75.109.163:46872.service - OpenSSH per-connection server daemon (147.75.109.163:46872).
Jan 30 14:17:08.173493 systemd-logind[1806]: Removed session 20.
Jan 30 14:17:08.206956 sshd[8020]: Accepted publickey for core from 147.75.109.163 port 46872 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:08.208430 sshd[8020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:08.214079 systemd-logind[1806]: New session 21 of user core.
Jan 30 14:17:08.232603 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:17:08.363879 sshd[8020]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:08.365452 systemd[1]: sshd@21-139.178.70.199:22-147.75.109.163:46872.service: Deactivated successfully.
Jan 30 14:17:08.366387 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:17:08.367023 systemd-logind[1806]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:17:08.367666 systemd-logind[1806]: Removed session 21.
Jan 30 14:17:10.962320 systemd[1]: Started sshd@22-139.178.70.199:22-218.92.0.155:46891.service - OpenSSH per-connection server daemon (218.92.0.155:46891).
Jan 30 14:17:12.031665 sshd[8053]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Jan 30 14:17:13.384488 systemd[1]: Started sshd@23-139.178.70.199:22-147.75.109.163:46888.service - OpenSSH per-connection server daemon (147.75.109.163:46888).
Jan 30 14:17:13.413707 sshd[8055]: Accepted publickey for core from 147.75.109.163 port 46888 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:13.414584 sshd[8055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:13.417768 systemd-logind[1806]: New session 22 of user core.
Jan 30 14:17:13.431302 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:17:13.522813 sshd[8055]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:13.524307 systemd[1]: sshd@23-139.178.70.199:22-147.75.109.163:46888.service: Deactivated successfully.
Jan 30 14:17:13.525218 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:17:13.525936 systemd-logind[1806]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:17:13.526661 systemd-logind[1806]: Removed session 22.
Jan 30 14:17:13.997219 sshd[8047]: PAM: Permission denied for root from 218.92.0.155
Jan 30 14:17:14.289504 sshd[8080]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Jan 30 14:17:16.666160 sshd[8047]: PAM: Permission denied for root from 218.92.0.155
Jan 30 14:17:16.960232 sshd[8083]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Jan 30 14:17:18.557334 systemd[1]: Started sshd@24-139.178.70.199:22-147.75.109.163:47780.service - OpenSSH per-connection server daemon (147.75.109.163:47780).
Jan 30 14:17:18.586149 sshd[8085]: Accepted publickey for core from 147.75.109.163 port 47780 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:18.589564 sshd[8085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:18.600910 systemd-logind[1806]: New session 23 of user core.
Jan 30 14:17:18.609404 sshd[8047]: PAM: Permission denied for root from 218.92.0.155
Jan 30 14:17:18.616622 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:17:18.708424 sshd[8085]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:18.710050 systemd[1]: sshd@24-139.178.70.199:22-147.75.109.163:47780.service: Deactivated successfully.
Jan 30 14:17:18.711022 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:17:18.711754 systemd-logind[1806]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:17:18.712477 systemd-logind[1806]: Removed session 23.
Jan 30 14:17:18.755858 sshd[8047]: Received disconnect from 218.92.0.155 port 46891:11: [preauth]
Jan 30 14:17:18.755858 sshd[8047]: Disconnected from authenticating user root 218.92.0.155 port 46891 [preauth]
Jan 30 14:17:18.757228 systemd[1]: sshd@22-139.178.70.199:22-218.92.0.155:46891.service: Deactivated successfully.
Jan 30 14:17:23.728613 systemd[1]: Started sshd@25-139.178.70.199:22-147.75.109.163:47790.service - OpenSSH per-connection server daemon (147.75.109.163:47790).
Jan 30 14:17:23.758321 sshd[8132]: Accepted publickey for core from 147.75.109.163 port 47790 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA
Jan 30 14:17:23.759117 sshd[8132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:17:23.762377 systemd-logind[1806]: New session 24 of user core.
Jan 30 14:17:23.773567 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:17:23.866342 sshd[8132]: pam_unix(sshd:session): session closed for user core
Jan 30 14:17:23.868019 systemd[1]: sshd@25-139.178.70.199:22-147.75.109.163:47790.service: Deactivated successfully.
Jan 30 14:17:23.869012 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:17:23.869753 systemd-logind[1806]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:17:23.870473 systemd-logind[1806]: Removed session 24.
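The section closes the way it ran: three brute-force passes from 218.92.0.155 (at 14:13, 14:15, and 14:17), each producing three pam_unix authentication failures against root before a preauth disconnect, interleaved with routine publickey sessions for core from 147.75.109.163. A fail2ban-style tally over exactly these journal lines might look like the Go sketch below; the threshold and line format are taken from this log, not from any real tool.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// failRE matches the pam_unix failure lines above and captures the
// rhost= source address.
var failRE = regexp.MustCompile(`authentication failure;.*rhost=(\S+)`)

func main() {
	const threshold = 3 // three failures per pass in this log
	counts := map[string]int{}
	// Feed journal output on stdin, e.g.: journalctl -u 'sshd@*' | thisprog
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := failRE.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ip, n := range counts {
		if n >= threshold {
			fmt.Printf("%s: %d failed root logins (candidate for a drop rule)\n", ip, n)
		}
	}
}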