Jan 30 14:20:38.002681 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 30 14:20:38.002695 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 14:20:38.002701 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:20:38.002706 kernel: BIOS-provided physical RAM map:
Jan 30 14:20:38.002710 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 30 14:20:38.002714 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 30 14:20:38.002719 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 30 14:20:38.002723 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 30 14:20:38.002727 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 30 14:20:38.002731 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b10fff] usable
Jan 30 14:20:38.002735 kernel: BIOS-e820: [mem 0x0000000081b11000-0x0000000081b11fff] ACPI NVS
Jan 30 14:20:38.002740 kernel: BIOS-e820: [mem 0x0000000081b12000-0x0000000081b12fff] reserved
Jan 30 14:20:38.002744 kernel: BIOS-e820: [mem 0x0000000081b13000-0x000000008afccfff] usable
Jan 30 14:20:38.002748 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 30 14:20:38.002753 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 30 14:20:38.002758 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 30 14:20:38.002763 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 30 14:20:38.002768 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 30 14:20:38.002772 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 30 14:20:38.002777 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 14:20:38.002781 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 30 14:20:38.002786 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 30 14:20:38.002790 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 14:20:38.002794 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 30 14:20:38.002799 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 30 14:20:38.002803 kernel: NX (Execute Disable) protection: active
Jan 30 14:20:38.002808 kernel: APIC: Static calls initialized
Jan 30 14:20:38.002812 kernel: SMBIOS 3.2.1 present.
Jan 30 14:20:38.002818 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Jan 30 14:20:38.002823 kernel: tsc: Detected 3400.000 MHz processor
Jan 30 14:20:38.002827 kernel: tsc: Detected 3399.906 MHz TSC
Jan 30 14:20:38.002832 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:20:38.002837 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:20:38.002842 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 30 14:20:38.002846 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 30 14:20:38.002851 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:20:38.002856 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 30 14:20:38.002861 kernel: Using GB pages for direct mapping
Jan 30 14:20:38.002866 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:20:38.002871 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 30 14:20:38.002877 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 30 14:20:38.002882 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 30 14:20:38.002887 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 30 14:20:38.002892 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 30 14:20:38.002898 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 30 14:20:38.002903 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 30 14:20:38.002908 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 30 14:20:38.002913 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 30 14:20:38.002918 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 30 14:20:38.002923 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 30 14:20:38.002928 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 30 14:20:38.002934 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 30 14:20:38.002939 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:20:38.002944 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 30 14:20:38.002949 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 30 14:20:38.002954 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:20:38.002959 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:20:38.002964 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 30 14:20:38.002968 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 30 14:20:38.002973 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:20:38.002979 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 30 14:20:38.002984 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 30 14:20:38.002989 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 30 14:20:38.002994 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 30 14:20:38.002999 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 30 14:20:38.003004 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 30 14:20:38.003009 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 30 14:20:38.003014 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 30 14:20:38.003019 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 30 14:20:38.003024 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 30 14:20:38.003029 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 30 14:20:38.003034 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 30 14:20:38.003039 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 30 14:20:38.003044 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 30 14:20:38.003049 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 30 14:20:38.003054 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 30 14:20:38.003059 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 30 14:20:38.003065 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 30 14:20:38.003070 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 30 14:20:38.003075 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 30 14:20:38.003079 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 30 14:20:38.003084 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 30 14:20:38.003089 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 30 14:20:38.003094 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 30 14:20:38.003099 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 30 14:20:38.003104 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 30 14:20:38.003109 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 30 14:20:38.003115 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 30 14:20:38.003120 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 30 14:20:38.003124 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 30 14:20:38.003129 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 30 14:20:38.003134 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 30 14:20:38.003139 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 30 14:20:38.003144 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 30 14:20:38.003149 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 30 14:20:38.003154 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 30 14:20:38.003159 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 30 14:20:38.003164 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 30 14:20:38.003169 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 30 14:20:38.003174 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 30 14:20:38.003179 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 30 14:20:38.003184 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 30 14:20:38.003189 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 30 14:20:38.003194 kernel: No NUMA configuration found
Jan 30 14:20:38.003199 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 30 14:20:38.003204 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 30 14:20:38.003210 kernel: Zone ranges:
Jan 30 14:20:38.003215 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:20:38.003220 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 14:20:38.003225 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 30 14:20:38.003230 kernel: Movable zone start for each node
Jan 30 14:20:38.003234 kernel: Early memory node ranges
Jan 30 14:20:38.003239 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 30 14:20:38.003244 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 30 14:20:38.003250 kernel: node 0: [mem 0x0000000040400000-0x0000000081b10fff]
Jan 30 14:20:38.003255 kernel: node 0: [mem 0x0000000081b13000-0x000000008afccfff]
Jan 30 14:20:38.003260 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 30 14:20:38.003265 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 30 14:20:38.003273 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 30 14:20:38.003279 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 30 14:20:38.003284 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:20:38.003290 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 30 14:20:38.003296 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 30 14:20:38.003304 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 30 14:20:38.003309 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 30 14:20:38.003314 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 30 14:20:38.003320 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 30 14:20:38.003325 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 30 14:20:38.003330 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 30 14:20:38.003336 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 14:20:38.003341 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 14:20:38.003347 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 14:20:38.003353 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 14:20:38.003358 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 14:20:38.003363 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 14:20:38.003368 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 14:20:38.003374 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 14:20:38.003379 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 14:20:38.003384 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 14:20:38.003390 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 14:20:38.003395 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 14:20:38.003401 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 14:20:38.003406 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 14:20:38.003412 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 14:20:38.003417 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 14:20:38.003422 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 30 14:20:38.003427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:20:38.003433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:20:38.003438 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:20:38.003443 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 14:20:38.003450 kernel: TSC deadline timer available
Jan 30 14:20:38.003455 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 30 14:20:38.003460 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 30 14:20:38.003466 kernel: Booting paravirtualized kernel on bare hardware
Jan 30 14:20:38.003471 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:20:38.003476 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 14:20:38.003482 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 14:20:38.003487 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 14:20:38.003492 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 14:20:38.003499 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:20:38.003505 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:20:38.003510 kernel: random: crng init done
Jan 30 14:20:38.003515 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 30 14:20:38.003520 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 30 14:20:38.003526 kernel: Fallback order for Node 0: 0
Jan 30 14:20:38.003531 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 30 14:20:38.003536 kernel: Policy zone: Normal
Jan 30 14:20:38.003543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:20:38.003548 kernel: software IO TLB: area num 16.
Jan 30 14:20:38.003553 kernel: Memory: 32720296K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 732424K reserved, 0K cma-reserved)
Jan 30 14:20:38.003559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 14:20:38.003564 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 14:20:38.003569 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:20:38.003575 kernel: Dynamic Preempt: voluntary
Jan 30 14:20:38.003580 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:20:38.003586 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:20:38.003592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 14:20:38.003597 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:20:38.003603 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:20:38.003608 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:20:38.003613 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:20:38.003619 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 14:20:38.003624 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 30 14:20:38.003629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:20:38.003634 kernel: Console: colour dummy device 80x25
Jan 30 14:20:38.003641 kernel: printk: console [tty0] enabled
Jan 30 14:20:38.003646 kernel: printk: console [ttyS1] enabled
Jan 30 14:20:38.003651 kernel: ACPI: Core revision 20230628
Jan 30 14:20:38.003657 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 30 14:20:38.003662 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:20:38.003667 kernel: DMAR: Host address width 39
Jan 30 14:20:38.003673 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 30 14:20:38.003678 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 30 14:20:38.003683 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 30 14:20:38.003690 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 30 14:20:38.003695 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 30 14:20:38.003700 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 30 14:20:38.003706 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 30 14:20:38.003711 kernel: x2apic enabled
Jan 30 14:20:38.003716 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 30 14:20:38.003722 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 30 14:20:38.003727 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 30 14:20:38.003733 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 30 14:20:38.003739 kernel: process: using mwait in idle threads
Jan 30 14:20:38.003744 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 14:20:38.003749 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 14:20:38.003754 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:20:38.003760 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 14:20:38.003765 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 14:20:38.003770 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 14:20:38.003775 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:20:38.003781 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 14:20:38.003786 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 14:20:38.003791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:20:38.003797 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:20:38.003802 kernel: TAA: Mitigation: TSX disabled
Jan 30 14:20:38.003808 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 30 14:20:38.003813 kernel: SRBDS: Mitigation: Microcode
Jan 30 14:20:38.003818 kernel: GDS: Mitigation: Microcode
Jan 30 14:20:38.003824 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:20:38.003829 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:20:38.003834 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:20:38.003839 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 14:20:38.003844 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 14:20:38.003850 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:20:38.003856 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 14:20:38.003861 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 14:20:38.003866 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 30 14:20:38.003872 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:20:38.003877 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:20:38.003882 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:20:38.003888 kernel: landlock: Up and running.
Jan 30 14:20:38.003893 kernel: SELinux: Initializing.
Jan 30 14:20:38.003898 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:20:38.003904 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:20:38.003909 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 30 14:20:38.003914 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:20:38.003921 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:20:38.003926 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:20:38.003931 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 30 14:20:38.003937 kernel: ... version: 4
Jan 30 14:20:38.003942 kernel: ... bit width: 48
Jan 30 14:20:38.003947 kernel: ... generic registers: 4
Jan 30 14:20:38.003952 kernel: ... value mask: 0000ffffffffffff
Jan 30 14:20:38.003958 kernel: ... max period: 00007fffffffffff
Jan 30 14:20:38.003964 kernel: ... fixed-purpose events: 3
Jan 30 14:20:38.003969 kernel: ... event mask: 000000070000000f
Jan 30 14:20:38.003974 kernel: signal: max sigframe size: 2032
Jan 30 14:20:38.003980 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 30 14:20:38.003985 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:20:38.003991 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:20:38.003996 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 30 14:20:38.004001 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:20:38.004006 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:20:38.004013 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 30 14:20:38.004018 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 14:20:38.004024 kernel: smp: Brought up 1 node, 16 CPUs
Jan 30 14:20:38.004029 kernel: smpboot: Max logical packages: 1
Jan 30 14:20:38.004034 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 30 14:20:38.004039 kernel: devtmpfs: initialized
Jan 30 14:20:38.004045 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:20:38.004050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b11000-0x81b11fff] (4096 bytes)
Jan 30 14:20:38.004055 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 30 14:20:38.004062 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:20:38.004067 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 14:20:38.004072 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:20:38.004078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:20:38.004083 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:20:38.004088 kernel: audit: type=2000 audit(1738246832.039:1): state=initialized audit_enabled=0 res=1
Jan 30 14:20:38.004093 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:20:38.004099 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:20:38.004104 kernel: cpuidle: using governor menu
Jan 30 14:20:38.004110 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:20:38.004115 kernel: dca service started, version 1.12.1
Jan 30 14:20:38.004121 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 14:20:38.004126 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:20:38.004131 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 30 14:20:38.004137 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:20:38.004142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:20:38.004147 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:20:38.004152 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:20:38.004159 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:20:38.004164 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:20:38.004169 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:20:38.004174 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:20:38.004180 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:20:38.004185 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 30 14:20:38.004190 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:20:38.004196 kernel: ACPI: SSDT 0xFFFF96B801607400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 30 14:20:38.004201 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:20:38.004207 kernel: ACPI: SSDT 0xFFFF96B8015FF000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 30 14:20:38.004212 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:20:38.004218 kernel: ACPI: SSDT 0xFFFF96B8015E5600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 30 14:20:38.004223 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:20:38.004228 kernel: ACPI: SSDT 0xFFFF96B8015FC000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 30 14:20:38.004233 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:20:38.004239 kernel: ACPI: SSDT 0xFFFF96B80160A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 30 14:20:38.004244 kernel: ACPI: Dynamic OEM Table Load:
Jan 30 14:20:38.004249 kernel: ACPI: SSDT 0xFFFF96B801606400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 30 14:20:38.004255 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 30 14:20:38.004261 kernel: ACPI: Interpreter enabled
Jan 30 14:20:38.004266 kernel: ACPI: PM: (supports S0 S5)
Jan 30 14:20:38.004271 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:20:38.004276 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 30 14:20:38.004282 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 30 14:20:38.004287 kernel: HEST: Table parsing has been initialized.
Jan 30 14:20:38.004292 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 30 14:20:38.004297 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:20:38.004306 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:20:38.004311 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 30 14:20:38.004336 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 30 14:20:38.004341 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 30 14:20:38.004361 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 30 14:20:38.004366 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 30 14:20:38.004371 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 30 14:20:38.004377 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 30 14:20:38.004382 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 30 14:20:38.004387 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 30 14:20:38.004393 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 30 14:20:38.004399 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 30 14:20:38.004404 kernel: ACPI: \PIN_: New power resource
Jan 30 14:20:38.004409 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 30 14:20:38.004483 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:20:38.004536 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 30 14:20:38.004583 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 30 14:20:38.004593 kernel: PCI host bridge to bus 0000:00
Jan 30 14:20:38.004642 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:20:38.004686 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:20:38.004727 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:20:38.004769 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 30 14:20:38.004810 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 30 14:20:38.004851 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 30 14:20:38.004912 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 30 14:20:38.004970 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 30 14:20:38.005019 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.005072 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 30 14:20:38.005119 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 30 14:20:38.005171 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 30 14:20:38.005222 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 30 14:20:38.005275 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 30 14:20:38.005326 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 30 14:20:38.005374 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 30 14:20:38.005425 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 30 14:20:38.005472 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 30 14:20:38.005522 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 30 14:20:38.005573 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 30 14:20:38.005621 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 14:20:38.005673 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 30 14:20:38.005721 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 14:20:38.005771 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 30 14:20:38.005821 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 30 14:20:38.005870 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 30 14:20:38.005928 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 30 14:20:38.005979 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 30 14:20:38.006025 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 30 14:20:38.006078 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 30 14:20:38.006125 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 30 14:20:38.006175 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 30 14:20:38.006225 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 30 14:20:38.006275 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 30 14:20:38.006414 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 30 14:20:38.006465 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jan 30 14:20:38.006523 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jan 30 14:20:38.006571 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jan 30 14:20:38.006623 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 30 14:20:38.006669 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 30 14:20:38.006721 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 30 14:20:38.006770 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.006825 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 30 14:20:38.006875 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.006927 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 30 14:20:38.006976 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.007027 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 30 14:20:38.007076 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.007130 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 30 14:20:38.007179 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.007230 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 30 14:20:38.007279 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 30 14:20:38.007340 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 30 14:20:38.007392 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 30 14:20:38.007444 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 30 14:20:38.007491 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 30 14:20:38.007543 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 30 14:20:38.007591 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 30 14:20:38.007647 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 30 14:20:38.007697 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 30 14:20:38.007749 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 30 14:20:38.007798 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 30 14:20:38.007847 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 14:20:38.007897 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 14:20:38.007950 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 30 14:20:38.007999 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 30 14:20:38.008048 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 30 14:20:38.008100 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 30 14:20:38.008149 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 30 14:20:38.008198 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 30 14:20:38.008247 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 14:20:38.008297 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 30 14:20:38.008351 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 14:20:38.008400 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jan 30 14:20:38.008454 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Jan 30 14:20:38.008507 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Jan 30 14:20:38.008556 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Jan 30 14:20:38.008604 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Jan 30 14:20:38.008654 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Jan 30 14:20:38.008702 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.008751 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jan 30 14:20:38.008800 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:20:38.008851 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jan 30 14:20:38.008907 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Jan 30 14:20:38.008956 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Jan 30 14:20:38.009006 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Jan 30 14:20:38.009055 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Jan 30 14:20:38.009104 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Jan 30 14:20:38.009153 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Jan 30 14:20:38.009205 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jan 30 14:20:38.009252 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 30 14:20:38.009304 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jan 30 14:20:38.009354 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jan 30 14:20:38.009407 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Jan 30 14:20:38.009457 kernel: pci 0000:06:00.0: enabling Extended Tags
Jan 30 14:20:38.009505 kernel: pci 0000:06:00.0: supports D1 D2
Jan 30 14:20:38.009555 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:20:38.009606 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jan 30 14:20:38.009655 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jan 30 14:20:38.009702 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:20:38.009756 kernel: pci_bus 0000:07: extended config space not accessible
Jan 30 14:20:38.009813 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Jan 30 14:20:38.009865 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Jan 30 14:20:38.009916 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Jan 30 14:20:38.009969 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Jan 30 14:20:38.010021 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:20:38.010072 kernel: pci 0000:07:00.0: supports D1 D2
Jan 30 14:20:38.010123 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:20:38.010173 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jan 30 14:20:38.010223 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jan 30 14:20:38.010273 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:20:38.010281 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Jan 30 14:20:38.010289 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Jan 30 14:20:38.010294 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Jan 30 14:20:38.010304 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Jan 30 14:20:38.010329 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Jan 30 14:20:38.010334 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Jan 30 14:20:38.010340 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Jan 30 14:20:38.010359 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Jan 30 14:20:38.010365 kernel: iommu: Default domain type: Translated
Jan 30 14:20:38.010371 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:20:38.010378 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:20:38.010383 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:20:38.010389 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Jan 30 14:20:38.010395 kernel: e820: reserve RAM buffer [mem 0x81b11000-0x83ffffff]
Jan 30 14:20:38.010400 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Jan 30 14:20:38.010406 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Jan 30 14:20:38.010411 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Jan 30 14:20:38.010416 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Jan 30 14:20:38.010469 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Jan 30 14:20:38.010521 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Jan 30 14:20:38.010573 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:20:38.010582 kernel: vgaarb: loaded
Jan 30 14:20:38.010588 kernel: clocksource: Switched to clocksource tsc-early
Jan 30 14:20:38.010593 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:20:38.010599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:20:38.010605 kernel: pnp: PnP ACPI init
Jan 30 14:20:38.010653 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Jan 30 14:20:38.010706 kernel: pnp 00:02: [dma 0 disabled]
Jan 30 14:20:38.010754 kernel: pnp 00:03: [dma 0 disabled]
Jan 30 14:20:38.010801 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Jan 30 14:20:38.010845 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Jan 30 14:20:38.010893 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Jan 30 14:20:38.010939 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Jan 30 14:20:38.010986 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Jan 30 14:20:38.011030 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Jan 30 14:20:38.011074 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Jan 30 14:20:38.011120 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Jan 30 14:20:38.011164 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Jan 30 14:20:38.011207 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Jan 30 14:20:38.011252 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Jan 30 14:20:38.011326 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Jan 30 14:20:38.011386 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Jan 30 14:20:38.011430 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Jan 30 14:20:38.011473 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Jan 30 14:20:38.011517 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Jan 30 14:20:38.011559 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Jan 30 14:20:38.011603 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Jan 30 14:20:38.011652 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Jan 30 14:20:38.011661 kernel: pnp: PnP ACPI: found 10 devices
Jan 30 14:20:38.011667 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:20:38.011672 kernel: NET: Registered PF_INET protocol family
Jan 30 14:20:38.011678 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:20:38.011684 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jan 30 14:20:38.011689 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:20:38.011695 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:20:38.011702 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 14:20:38.011708 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Jan 30 14:20:38.011714 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:20:38.011719 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:20:38.011725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:20:38.011731 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:20:38.011779 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Jan 30 14:20:38.011828 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Jan 30 14:20:38.011879 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Jan 30 14:20:38.011929 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 30 14:20:38.011980 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 30 14:20:38.012030 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 30 14:20:38.012078 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 30 14:20:38.012127 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 14:20:38.012174 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 30 14:20:38.012222 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 14:20:38.012271 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jan 30 14:20:38.012344 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jan 30 14:20:38.012409 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:20:38.012457 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jan 30 14:20:38.012505 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jan 30 14:20:38.012557 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 30 14:20:38.012604 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jan 30 14:20:38.012652 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jan 30 14:20:38.012700 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jan 30 14:20:38.012751 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jan 30 14:20:38.012800 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:20:38.012847 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jan 30 14:20:38.012896 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jan 30 14:20:38.012943 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jan 30 14:20:38.012990 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Jan 30 14:20:38.013032 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:20:38.013075 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:20:38.013116 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:20:38.013159 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Jan 30 14:20:38.013200 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Jan 30 14:20:38.013251 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Jan 30 14:20:38.013297 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Jan 30 14:20:38.013383 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Jan 30 14:20:38.013427 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Jan 30 14:20:38.013476 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 14:20:38.013519 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Jan 30 14:20:38.013568 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Jan 30 14:20:38.013614 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Jan 30 14:20:38.013660 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Jan 30 14:20:38.013706 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Jan 30 14:20:38.013713 kernel: PCI: CLS 64 bytes, default 64
Jan 30 14:20:38.013719 kernel: DMAR: No ATSR found
Jan 30 14:20:38.013725 kernel: DMAR: No SATC found
Jan 30 14:20:38.013731 kernel: DMAR: dmar0: Using Queued invalidation
Jan 30 14:20:38.013778 kernel: pci 0000:00:00.0: Adding to iommu group 0
Jan 30 14:20:38.013830 kernel: pci 0000:00:01.0: Adding to iommu group 1
Jan 30 14:20:38.013877 kernel: pci 0000:00:08.0: Adding to iommu group 2
Jan 30 14:20:38.013926 kernel: pci 0000:00:12.0: Adding to iommu group 3
Jan 30 14:20:38.013972 kernel: pci 0000:00:14.0: Adding to iommu group 4
Jan 30 14:20:38.014020 kernel: pci 0000:00:14.2: Adding to iommu group 4
Jan 30 14:20:38.014066 kernel: pci 0000:00:15.0: Adding to iommu group 5
Jan 30 14:20:38.014113 kernel: pci 0000:00:15.1: Adding to iommu group 5
Jan 30 14:20:38.014160 kernel: pci 0000:00:16.0: Adding to iommu group 6
Jan 30 14:20:38.014207 kernel: pci 0000:00:16.1: Adding to iommu group 6
Jan 30 14:20:38.014256 kernel: pci 0000:00:16.4: Adding to iommu group 6
Jan 30 14:20:38.014307 kernel: pci 0000:00:17.0: Adding to iommu group 7
Jan 30 14:20:38.014389 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Jan 30 14:20:38.014437 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Jan 30 14:20:38.014485 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Jan 30 14:20:38.014532 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Jan 30 14:20:38.014579 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Jan 30 14:20:38.014626 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Jan 30 14:20:38.014677 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Jan 30 14:20:38.014724 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Jan 30 14:20:38.014772 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Jan 30 14:20:38.014821 kernel: pci 0000:01:00.0: Adding to iommu group 1
Jan 30 14:20:38.014870 kernel: pci 0000:01:00.1: Adding to iommu group 1
Jan 30 14:20:38.014919 kernel: pci 0000:03:00.0: Adding to iommu group 15
Jan 30 14:20:38.014969 kernel: pci 0000:04:00.0: Adding to iommu group 16
Jan 30 14:20:38.015018 kernel: pci 0000:06:00.0: Adding to iommu group 17
Jan 30 14:20:38.015070 kernel: pci 0000:07:00.0: Adding to iommu group 17
Jan 30 14:20:38.015078 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Jan 30 14:20:38.015084 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 14:20:38.015090 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Jan 30 14:20:38.015096 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Jan 30 14:20:38.015102 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Jan 30 14:20:38.015107 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Jan 30 14:20:38.015113 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Jan 30 14:20:38.015166 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Jan 30 14:20:38.015176 kernel: Initialise system trusted keyrings
Jan 30 14:20:38.015182 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Jan 30 14:20:38.015188 kernel: Key type asymmetric registered
Jan 30 14:20:38.015193 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:20:38.015199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:20:38.015205 kernel: io scheduler mq-deadline registered
Jan 30 14:20:38.015210 kernel: io scheduler kyber registered
Jan 30 14:20:38.015216 kernel: io scheduler bfq registered
Jan 30 14:20:38.015263 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Jan 30 14:20:38.015336 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Jan 30 14:20:38.015399 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Jan 30 14:20:38.015448 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Jan 30 14:20:38.015495 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Jan 30 14:20:38.015543 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Jan 30 14:20:38.015595 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jan 30 14:20:38.015606 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jan 30 14:20:38.015612 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Jan 30 14:20:38.015617 kernel: pstore: Using crash dump compression: deflate
Jan 30 14:20:38.015623 kernel: pstore: Registered erst as persistent store backend
Jan 30 14:20:38.015629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:20:38.015635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:20:38.015640 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:20:38.015646 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 14:20:38.015652 kernel: hpet_acpi_add: no address or irqs in _CRS
Jan 30 14:20:38.015701 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jan 30 14:20:38.015709 kernel: i8042: PNP: No PS/2 controller found.
Jan 30 14:20:38.015752 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jan 30 14:20:38.015797 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jan 30 14:20:38.015841 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T14:20:36 UTC (1738246836)
Jan 30 14:20:38.015884 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jan 30 14:20:38.015892 kernel: intel_pstate: Intel P-state driver initializing
Jan 30 14:20:38.015898 kernel: intel_pstate: Disabling energy efficiency optimization
Jan 30 14:20:38.015906 kernel: intel_pstate: HWP enabled
Jan 30 14:20:38.015911 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Jan 30 14:20:38.015917 kernel: vesafb: scrolling: redraw
Jan 30 14:20:38.015923 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Jan 30 14:20:38.015929 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000075929ad5, using 768k, total 768k
Jan 30 14:20:38.015934 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:20:38.015940 kernel: fb0: VESA VGA frame buffer device
Jan 30 14:20:38.015945 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:20:38.015951 kernel: Segment Routing with IPv6
Jan 30 14:20:38.015958 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:20:38.015964 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:20:38.015969 kernel: Key type dns_resolver registered
Jan 30 14:20:38.015975 kernel: microcode: Microcode Update Driver: v2.2.
Jan 30 14:20:38.015981 kernel: IPI shorthand broadcast: enabled
Jan 30 14:20:38.015986 kernel: sched_clock: Marking stable (2475001102, 1384805382)->(4404140979, -544334495)
Jan 30 14:20:38.015992 kernel: registered taskstats version 1
Jan 30 14:20:38.015997 kernel: Loading compiled-in X.509 certificates
Jan 30 14:20:38.016003 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 14:20:38.016010 kernel: Key type .fscrypt registered
Jan 30 14:20:38.016015 kernel: Key type fscrypt-provisioning registered
Jan 30 14:20:38.016021 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:20:38.016027 kernel: ima: No architecture policies found
Jan 30 14:20:38.016032 kernel: clk: Disabling unused clocks
Jan 30 14:20:38.016038 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 14:20:38.016044 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 14:20:38.016049 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 14:20:38.016055 kernel: Run /init as init process
Jan 30 14:20:38.016061 kernel: with arguments:
Jan 30 14:20:38.016067 kernel: /init
Jan 30 14:20:38.016073 kernel: with environment:
Jan 30 14:20:38.016078 kernel: HOME=/
Jan 30 14:20:38.016084 kernel: TERM=linux
Jan 30 14:20:38.016089 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:20:38.016096 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:20:38.016104 systemd[1]: Detected architecture x86-64.
Jan 30 14:20:38.016110 systemd[1]: Running in initrd.
Jan 30 14:20:38.016116 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:20:38.016122 systemd[1]: Hostname set to .
Jan 30 14:20:38.016127 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:20:38.016133 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:20:38.016139 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:20:38.016145 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:20:38.016152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:20:38.016158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:20:38.016164 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:20:38.016170 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:20:38.016177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:20:38.016183 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:20:38.016189 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Jan 30 14:20:38.016196 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Jan 30 14:20:38.016201 kernel: clocksource: Switched to clocksource tsc
Jan 30 14:20:38.016207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:20:38.016213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:20:38.016219 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:20:38.016225 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:20:38.016231 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:20:38.016237 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:20:38.016243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:20:38.016250 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:20:38.016256 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:20:38.016261 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:20:38.016267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:20:38.016273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:20:38.016279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:20:38.016285 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:20:38.016291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:20:38.016298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:20:38.016307 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:20:38.016313 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:20:38.016340 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:20:38.016371 systemd-journald[269]: Collecting audit messages is disabled.
Jan 30 14:20:38.016386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:20:38.016393 systemd-journald[269]: Journal started
Jan 30 14:20:38.016406 systemd-journald[269]: Runtime Journal (/run/log/journal/6d43a6d87896443cbc8fa9a1913a8bb2) is 8.0M, max 639.9M, 631.9M free.
Jan 30 14:20:38.039432 systemd-modules-load[272]: Inserted module 'overlay' Jan 30 14:20:38.061303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:20:38.089813 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:20:38.154560 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:20:38.154600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:20:38.154618 kernel: Bridge firewalling registered Jan 30 14:20:38.132771 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:20:38.151282 systemd-modules-load[272]: Inserted module 'br_netfilter' Jan 30 14:20:38.165703 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:20:38.184678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:20:38.192662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:38.223660 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:20:38.227284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:20:38.244057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:20:38.244508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:20:38.247720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:20:38.249021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:20:38.249735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:20:38.250863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:20:38.252022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:20:38.255715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:20:38.259570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:38.260362 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:20:38.271427 systemd-resolved[300]: Positive Trust Anchors: Jan 30 14:20:38.271432 systemd-resolved[300]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:20:38.271454 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:20:38.392662 dracut-cmdline[308]: dracut-dracut-053 Jan 30 14:20:38.392662 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:20:38.273011 systemd-resolved[300]: Defaulting to hostname 'linux'. Jan 30 14:20:38.290552 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:20:38.290676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:20:38.551351 kernel: SCSI subsystem initialized Jan 30 14:20:38.573332 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:20:38.596321 kernel: iscsi: registered transport (tcp) Jan 30 14:20:38.628284 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:20:38.628305 kernel: QLogic iSCSI HBA Driver Jan 30 14:20:38.661593 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:20:38.684588 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:20:38.739038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:20:38.739057 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:20:38.758691 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:20:38.816339 kernel: raid6: avx2x4 gen() 53356 MB/s Jan 30 14:20:38.848377 kernel: raid6: avx2x2 gen() 53894 MB/s Jan 30 14:20:38.884751 kernel: raid6: avx2x1 gen() 45251 MB/s Jan 30 14:20:38.884769 kernel: raid6: using algorithm avx2x2 gen() 53894 MB/s Jan 30 14:20:38.931801 kernel: raid6: .... xor() 30511 MB/s, rmw enabled Jan 30 14:20:38.931820 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:20:38.972328 kernel: xor: automatically using best checksumming function avx Jan 30 14:20:39.089335 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:20:39.095022 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:20:39.122614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:20:39.129191 systemd-udevd[497]: Using default interface naming scheme 'v255'. Jan 30 14:20:39.133417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:20:39.169525 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 30 14:20:39.205161 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 30 14:20:39.221991 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:20:39.247658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:20:39.332109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:20:39.364660 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 14:20:39.364704 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 14:20:39.375352 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:20:39.395306 kernel: libata version 3.00 loaded. Jan 30 14:20:39.408307 kernel: ACPI: bus type USB registered Jan 30 14:20:39.408345 kernel: PTP clock support registered Jan 30 14:20:39.408360 kernel: usbcore: registered new interface driver usbfs Jan 30 14:20:39.404811 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:20:39.485536 kernel: usbcore: registered new interface driver hub Jan 30 14:20:39.485552 kernel: usbcore: registered new device driver usb Jan 30 14:20:39.485560 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:20:39.470167 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:20:39.501279 kernel: AES CTR mode by8 optimization enabled Jan 30 14:20:39.501297 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 14:20:39.836622 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 14:20:39.836718 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 30 14:20:39.836784 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 14:20:39.836846 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 14:20:39.836907 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 14:20:39.836966 kernel: scsi host0: ahci Jan 30 14:20:39.837038 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 14:20:39.837100 kernel: scsi host1: ahci Jan 30 14:20:39.837159 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 14:20:39.837219 kernel: scsi host2: ahci Jan 30 14:20:39.837279 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 14:20:39.837348 kernel: scsi host3: ahci Jan 30 14:20:39.837410 kernel: hub 1-0:1.0: USB hub found Jan 30 14:20:39.837479 kernel: scsi host4: ahci Jan 30 14:20:39.837537 kernel: hub 1-0:1.0: 16 ports detected Jan 30 14:20:39.837595 kernel: scsi host5: ahci Jan 30 14:20:39.837658 kernel: hub 2-0:1.0: USB hub found Jan 30 14:20:39.837728 kernel: scsi host6: ahci Jan 30 14:20:39.837793 kernel: hub 2-0:1.0: 10 ports detected Jan 30 14:20:39.837859 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 30 14:20:39.837868 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 14:20:39.837876 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 30 14:20:39.837883 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 30 14:20:39.837890 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 30 14:20:39.837898 kernel: pps pps0: new PPS source ptp0 Jan 30 14:20:39.837962 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 30 14:20:39.837970 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 30 14:20:40.030711 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 30 14:20:40.030721 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 14:20:40.030793 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 14:20:40.077765 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 30 14:20:40.077776 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 30 14:20:40.077783 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d4:36 Jan 30 14:20:40.077858 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 30 14:20:40.077923 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 14:20:40.077988 kernel: hub 1-14:1.0: USB hub found Jan 30 14:20:40.078063 kernel: hub 1-14:1.0: 4 ports detected Jan 30 14:20:40.078126 kernel: pps pps1: new PPS source ptp1 Jan 30 14:20:39.501133 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:20:40.184923 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 30 14:20:40.185123 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 14:20:40.185305 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d4:37 Jan 30 14:20:40.185477 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 30 14:20:40.185635 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.185653 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 14:20:40.185809 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:39.623368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:20:40.220801 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.220815 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.022418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:20:40.313236 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 14:20:40.313252 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 14:20:40.313262 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.313274 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 14:20:40.313286 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 14:20:40.051420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 14:20:40.378512 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 14:20:40.378526 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Jan 30 14:20:41.193467 kernel: ata1.00: Features: NCQ-prio Jan 30 14:20:41.193478 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 14:20:41.193553 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 14:20:41.193562 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 14:20:41.193672 kernel: ata2.00: Features: NCQ-prio Jan 30 14:20:41.193681 kernel: ata1.00: configured for UDMA/133 Jan 30 14:20:41.193688 kernel: ata2.00: configured for UDMA/133 Jan 30 14:20:41.193695 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 14:20:41.193767 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 14:20:41.193832 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:20:41.193841 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 30 14:20:41.193908 kernel: usbcore: registered new interface driver usbhid Jan 30 14:20:41.193916 kernel: usbhid: USB HID core driver Jan 30 14:20:41.193924 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 14:20:41.193931 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 30 14:20:41.193996 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.194006 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 14:20:41.194073 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:20:41.194081 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 14:20:41.194141 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 14:20:41.194200 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:20:41.194259 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 30 14:20:41.194333 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 14:20:41.194395 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:20:41.194455 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 14:20:41.194514 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:20:41.194522 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 30 14:20:41.194578 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 30 14:20:41.194641 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 14:20:41.194712 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 30 14:20:41.194771 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 14:20:41.194781 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 30 14:20:41.194838 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 14:20:41.194905 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 14:20:41.194962 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:20:41.195019 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 14:20:41.195079 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.195088 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Jan 30 14:20:41.195097 kernel: GPT:9289727 != 937703087 Jan 30 14:20:41.195104 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:20:41.195111 kernel: GPT:9289727 != 937703087 Jan 30 14:20:41.195118 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:20:41.195125 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.195132 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 30 14:20:41.195189 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (546) Jan 30 14:20:41.195197 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (552) Jan 30 14:20:41.195205 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 14:20:41.195268 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Jan 30 14:20:41.806007 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 14:20:41.806363 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.806402 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.806443 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.806477 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.806511 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.806551 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.806579 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 14:20:41.806911 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 30 14:20:41.807243 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 14:20:40.051456 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:40.200204 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:20:41.847432 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 30 14:20:41.847596 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 30 14:20:40.442439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:20:40.489419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:20:40.489454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:40.512398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:20:41.893394 disk-uuid[708]: Primary Header is updated. Jan 30 14:20:41.893394 disk-uuid[708]: Secondary Entries is updated. Jan 30 14:20:41.893394 disk-uuid[708]: Secondary Header is updated. Jan 30 14:20:40.536835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:20:40.575515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:20:40.970630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:41.090462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:20:41.103528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:41.137574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. 
Jan 30 14:20:41.181977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 30 14:20:41.216421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 14:20:41.245373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 14:20:41.259961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 14:20:41.292433 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:20:42.389698 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:42.409343 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:42.409360 disk-uuid[709]: The operation has completed successfully. Jan 30 14:20:42.445076 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:20:42.445124 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:20:42.500596 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:20:42.537375 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 14:20:42.537431 sh[738]: Success Jan 30 14:20:42.572481 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:20:42.594310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:20:42.602646 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:20:42.653268 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 14:20:42.653288 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:42.674668 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:20:42.693664 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:20:42.711637 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:20:42.749338 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:20:42.751873 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:20:42.760598 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:20:42.769540 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:20:42.876570 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:42.876589 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:42.876597 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:20:42.876604 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:20:42.876611 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:20:42.810337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:20:42.913558 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:42.913615 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:20:42.924133 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:20:42.962160 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 30 14:20:42.970622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:20:43.000686 ignition[826]: Ignition 2.19.0 Jan 30 14:20:43.000692 ignition[826]: Stage: fetch-offline Jan 30 14:20:43.002852 unknown[826]: fetched base config from "system" Jan 30 14:20:43.000716 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:43.002856 unknown[826]: fetched user config from "system" Jan 30 14:20:43.000724 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:43.022665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:20:43.000795 ignition[826]: parsed url from cmdline: "" Jan 30 14:20:43.036333 systemd-networkd[921]: lo: Link UP Jan 30 14:20:43.000799 ignition[826]: no config URL provided Jan 30 14:20:43.036336 systemd-networkd[921]: lo: Gained carrier Jan 30 14:20:43.000802 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:20:43.039420 systemd-networkd[921]: Enumeration completed Jan 30 14:20:43.000825 ignition[826]: parsing config with SHA512: 74ca5d76fb651e948060909f1be80c0f5d3c4a53701845a2a7c0a287c56e09413bcda56eed69f4bded0cae42e5348479b42d2dd22ccc75755d71666fe457e2d8 Jan 30 14:20:43.039511 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:20:43.003067 ignition[826]: fetch-offline: fetch-offline passed Jan 30 14:20:43.040448 systemd-networkd[921]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:20:43.003070 ignition[826]: POST message to Packet Timeline Jan 30 14:20:43.054518 systemd[1]: Reached target network.target - Network. Jan 30 14:20:43.003072 ignition[826]: POST Status error: resource requires networking Jan 30 14:20:43.068180 systemd-networkd[921]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:20:43.003111 ignition[826]: Ignition finished successfully Jan 30 14:20:43.071429 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 14:20:43.110198 ignition[934]: Ignition 2.19.0 Jan 30 14:20:43.089614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:20:43.110212 ignition[934]: Stage: kargs Jan 30 14:20:43.283422 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 14:20:43.097071 systemd-networkd[921]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:20:43.110567 ignition[934]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:43.279471 systemd-networkd[921]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:20:43.110589 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:43.112292 ignition[934]: kargs: kargs passed Jan 30 14:20:43.112322 ignition[934]: POST message to Packet Timeline Jan 30 14:20:43.112351 ignition[934]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:43.113495 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48797->[::1]:53: read: connection refused Jan 30 14:20:43.313837 ignition[934]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 14:20:43.314694 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33827->[::1]:53: read: connection refused Jan 30 14:20:43.507440 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 14:20:43.507941 systemd-networkd[921]: eno1: Link UP Jan 30 14:20:43.508109 systemd-networkd[921]: eno2: Link UP Jan 30 14:20:43.508229 systemd-networkd[921]: enp1s0f0np0: Link UP Jan 30 14:20:43.508389 systemd-networkd[921]: enp1s0f0np0: Gained carrier Jan 30 14:20:43.517471 systemd-networkd[921]: enp1s0f1np1: Link UP Jan 30 14:20:43.539393 systemd-networkd[921]: enp1s0f0np0: DHCPv4 address 139.178.70.237/31, gateway 139.178.70.236 acquired from 145.40.83.140 Jan 30 14:20:43.715112 ignition[934]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 14:20:43.716017 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58646->[::1]:53: read: connection refused Jan 30 14:20:44.307056 systemd-networkd[921]: enp1s0f1np1: Gained carrier Jan 30 14:20:44.516644 ignition[934]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 14:20:44.517824 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48756->[::1]:53: read: connection refused Jan 30 14:20:44.562914 systemd-networkd[921]: enp1s0f0np0: Gained IPv6LL Jan 30 14:20:45.650919 systemd-networkd[921]: enp1s0f1np1: Gained IPv6LL Jan 30 14:20:46.119570 ignition[934]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 14:20:46.120735 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53399->[::1]:53: read: connection refused Jan 30 14:20:49.324331 ignition[934]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 14:20:50.450888 ignition[934]: GET result: OK Jan 30 14:20:50.835045 ignition[934]: Ignition finished successfully Jan 30 14:20:50.839941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:20:50.864638 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:20:50.870817 ignition[953]: Ignition 2.19.0 Jan 30 14:20:50.870821 ignition[953]: Stage: disks Jan 30 14:20:50.870923 ignition[953]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:50.870929 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:50.871426 ignition[953]: disks: disks passed Jan 30 14:20:50.871429 ignition[953]: POST message to Packet Timeline Jan 30 14:20:50.871437 ignition[953]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:51.111685 ignition[953]: GET result: OK Jan 30 14:20:51.507483 ignition[953]: Ignition finished successfully Jan 30 14:20:51.510893 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 30 14:20:51.526535 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:20:51.544578 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:20:51.565542 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:20:51.586691 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:20:51.606696 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:20:51.635574 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:20:51.670499 systemd-fsck[971]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 14:20:51.682036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:20:51.707545 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:20:51.800368 kernel: EXT4-fs (sdb9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 14:20:51.800883 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:20:51.810717 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:20:51.851775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:20:51.861281 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:20:51.975700 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (980) Jan 30 14:20:51.975715 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:51.975723 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:51.975730 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:20:51.975737 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:20:51.975744 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:20:51.900943 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:20:51.976075 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 14:20:52.006418 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:20:52.043603 coreos-metadata[982]: Jan 30 14:20:52.037 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:20:52.006440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:20:52.083471 coreos-metadata[998]: Jan 30 14:20:52.037 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:20:52.026332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:20:52.052679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:20:52.086550 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:20:52.131428 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:20:52.141419 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:20:52.152356 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:20:52.163347 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:20:52.169579 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:20:52.180610 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 30 14:20:52.216335 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:20:52.234500 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:52.227134 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:20:52.242513 ignition[1100]: INFO : Ignition 2.19.0 Jan 30 14:20:52.242513 ignition[1100]: INFO : Stage: mount Jan 30 14:20:52.242513 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:52.242513 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:52.242513 ignition[1100]: INFO : mount: mount passed Jan 30 14:20:52.242513 ignition[1100]: INFO : POST message to Packet Timeline Jan 30 14:20:52.242513 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:52.312434 coreos-metadata[998]: Jan 30 14:20:52.293 INFO Fetch successful Jan 30 14:20:52.252695 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:20:52.354544 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 14:20:52.354610 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 14:20:52.468110 coreos-metadata[982]: Jan 30 14:20:52.468 INFO Fetch successful Jan 30 14:20:52.541307 coreos-metadata[982]: Jan 30 14:20:52.541 INFO wrote hostname ci-4081.3.0-a-b3fea05ed8 to /sysroot/etc/hostname Jan 30 14:20:52.542796 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:20:52.885101 ignition[1100]: INFO : GET result: OK Jan 30 14:20:53.185165 ignition[1100]: INFO : Ignition finished successfully Jan 30 14:20:53.187894 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:20:53.219530 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:20:53.229451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:20:53.298394 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1123) Jan 30 14:20:53.298421 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:53.317717 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:53.334896 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:20:53.371589 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:20:53.371612 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:20:53.383974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:20:53.415998 ignition[1140]: INFO : Ignition 2.19.0 Jan 30 14:20:53.415998 ignition[1140]: INFO : Stage: files Jan 30 14:20:53.429568 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:53.429568 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:53.429568 ignition[1140]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:20:53.429568 ignition[1140]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 14:20:53.419990 unknown[1140]: wrote ssh authorized keys file for user: core Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:53.812641 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 14:20:54.061688 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:20:54.226996 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:54.226996 ignition[1140]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: files passed Jan 30 14:20:54.257522 ignition[1140]: INFO : POST message to Packet Timeline Jan 30 14:20:54.257522 ignition[1140]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:54.872036 ignition[1140]: INFO : GET result: OK Jan 30 14:20:55.714967 ignition[1140]: INFO : Ignition finished successfully Jan 30 14:20:55.718254 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:20:55.745588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:20:55.755892 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:20:55.765692 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:20:55.765748 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:20:55.832817 initrd-setup-root-after-ignition[1179]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:20:55.832817 initrd-setup-root-after-ignition[1179]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:20:55.847708 initrd-setup-root-after-ignition[1183]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:20:55.837359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:20:55.872639 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:20:55.916791 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 30 14:20:55.960417 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:20:55.960467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:20:55.979708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:20:56.000497 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:20:56.021600 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:20:56.032588 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:20:56.106858 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:20:56.135748 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:20:56.164598 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:20:56.176919 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:20:56.197989 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:20:56.215918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:20:56.216348 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:20:56.245030 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:20:56.266920 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:20:56.284925 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:20:56.303919 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:20:56.324913 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:20:56.345912 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:20:56.365919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:20:56.386952 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:20:56.408931 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:20:56.428911 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:20:56.446798 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:20:56.447201 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:20:56.482778 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:20:56.492934 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:20:56.514788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:20:56.515249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:20:56.537782 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:20:56.538181 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:20:56.569867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:20:56.570339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:20:56.590120 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:20:56.608781 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:20:56.609241 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 14:20:56.630003 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:20:56.648022 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:20:56.665945 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:20:56.666251 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:20:56.686043 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:20:56.686381 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:20:56.709072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:20:56.709508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:20:56.728098 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:20:56.827587 ignition[1203]: INFO : Ignition 2.19.0 Jan 30 14:20:56.827587 ignition[1203]: INFO : Stage: umount Jan 30 14:20:56.827587 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:56.827587 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:56.827587 ignition[1203]: INFO : umount: umount passed Jan 30 14:20:56.827587 ignition[1203]: INFO : POST message to Packet Timeline Jan 30 14:20:56.827587 ignition[1203]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:56.728508 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:20:56.745981 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:20:56.746394 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:20:56.776435 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:20:56.799026 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:20:56.816446 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:20:56.816589 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:20:56.846956 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:20:56.847352 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:20:56.897738 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:20:56.902353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:20:56.902605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:20:56.953774 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:20:56.953850 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:20:58.316019 ignition[1203]: INFO : GET result: OK Jan 30 14:20:58.655487 ignition[1203]: INFO : Ignition finished successfully Jan 30 14:20:58.658848 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:20:58.659103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:20:58.675178 systemd[1]: Stopped target network.target - Network. Jan 30 14:20:58.691512 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:20:58.691766 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:20:58.709714 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:20:58.709853 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:20:58.727799 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:20:58.727957 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 30 14:20:58.745770 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:20:58.745938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:20:58.763800 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:20:58.763968 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:20:58.782121 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:20:58.791457 systemd-networkd[921]: enp1s0f1np1: DHCPv6 lease lost Jan 30 14:20:58.799555 systemd-networkd[921]: enp1s0f0np0: DHCPv6 lease lost Jan 30 14:20:58.799788 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:20:58.818407 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:20:58.818682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:20:58.837635 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:20:58.837988 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:20:58.858064 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:20:58.858188 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:20:58.887552 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:20:58.896500 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:20:58.896661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:20:58.906878 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:20:58.907042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:20:58.936636 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:20:58.936786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:20:58.955695 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:20:58.955859 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:20:58.974944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:20:58.997607 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:20:58.997976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:20:59.031486 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:20:59.031642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:20:59.037795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:20:59.037898 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:20:59.066575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:20:59.066716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:20:59.096010 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:20:59.096184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:20:59.136491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:20:59.136668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:59.448507 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). 
Jan 30 14:20:59.178442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:20:59.207494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:20:59.207653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:20:59.231610 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 14:20:59.231757 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:20:59.254570 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:20:59.254714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:20:59.274570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:20:59.274727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:59.297797 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:20:59.298121 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:20:59.319255 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:20:59.319522 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:20:59.340448 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:20:59.376561 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:20:59.390411 systemd[1]: Switching root. Jan 30 14:20:59.572489 systemd-journald[269]: Journal stopped
14:20:38.003225 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jan 30 14:20:38.003230 kernel: Movable zone start for each node Jan 30 14:20:38.003234 kernel: Early memory node ranges Jan 30 14:20:38.003239 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 30 14:20:38.003244 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 30 14:20:38.003250 kernel: node 0: [mem 0x0000000040400000-0x0000000081b10fff] Jan 30 14:20:38.003255 kernel: node 0: [mem 0x0000000081b13000-0x000000008afccfff] Jan 30 14:20:38.003260 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 30 14:20:38.003265 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 30 14:20:38.003273 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 30 14:20:38.003279 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 30 14:20:38.003284 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 14:20:38.003290 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 30 14:20:38.003296 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 30 14:20:38.003304 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 30 14:20:38.003309 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 30 14:20:38.003314 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 30 14:20:38.003320 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 30 14:20:38.003325 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 30 14:20:38.003330 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 30 14:20:38.003336 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 30 14:20:38.003341 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 30 14:20:38.003347 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 30 14:20:38.003353 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 30 14:20:38.003358 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 30 14:20:38.003363 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 30 14:20:38.003368 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 30 14:20:38.003374 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 30 14:20:38.003379 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 30 14:20:38.003384 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 30 14:20:38.003390 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 30 14:20:38.003395 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 30 14:20:38.003401 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 30 14:20:38.003406 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 30 14:20:38.003412 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 30 14:20:38.003417 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 30 14:20:38.003422 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 30 14:20:38.003427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 14:20:38.003433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 14:20:38.003438 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 14:20:38.003443 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 14:20:38.003450 kernel: TSC deadline timer available Jan 30 14:20:38.003455 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 30 14:20:38.003460 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Jan 30 14:20:38.003466 kernel: Booting paravirtualized kernel on bare hardware Jan 30 14:20:38.003471 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 14:20:38.003476 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 14:20:38.003482 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 14:20:38.003487 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 14:20:38.003492 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 14:20:38.003499 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:20:38.003505 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 14:20:38.003510 kernel: random: crng init done Jan 30 14:20:38.003515 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 30 14:20:38.003520 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 30 14:20:38.003526 kernel: Fallback order for Node 0: 0 Jan 30 14:20:38.003531 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 30 14:20:38.003536 kernel: Policy zone: Normal Jan 30 14:20:38.003543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 14:20:38.003548 kernel: software IO TLB: area num 16. Jan 30 14:20:38.003553 kernel: Memory: 32720296K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 732424K reserved, 0K cma-reserved) Jan 30 14:20:38.003559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 14:20:38.003564 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 14:20:38.003569 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 14:20:38.003575 kernel: Dynamic Preempt: voluntary Jan 30 14:20:38.003580 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 14:20:38.003586 kernel: rcu: RCU event tracing is enabled. Jan 30 14:20:38.003592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 14:20:38.003597 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 14:20:38.003603 kernel: Rude variant of Tasks RCU enabled. Jan 30 14:20:38.003608 kernel: Tracing variant of Tasks RCU enabled. Jan 30 14:20:38.003613 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 14:20:38.003619 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 14:20:38.003624 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 14:20:38.003629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 14:20:38.003634 kernel: Console: colour dummy device 80x25 Jan 30 14:20:38.003641 kernel: printk: console [tty0] enabled Jan 30 14:20:38.003646 kernel: printk: console [ttyS1] enabled Jan 30 14:20:38.003651 kernel: ACPI: Core revision 20230628 Jan 30 14:20:38.003657 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Jan 30 14:20:38.003662 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 14:20:38.003667 kernel: DMAR: Host address width 39 Jan 30 14:20:38.003673 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 14:20:38.003678 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 14:20:38.003683 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 30 14:20:38.003690 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 30 14:20:38.003695 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 14:20:38.003700 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 30 14:20:38.003706 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 14:20:38.003711 kernel: x2apic enabled Jan 30 14:20:38.003716 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 14:20:38.003722 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 14:20:38.003727 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 30 14:20:38.003733 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 14:20:38.003739 kernel: process: using mwait in idle threads Jan 30 14:20:38.003744 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 14:20:38.003749 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 14:20:38.003754 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 14:20:38.003760 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 14:20:38.003765 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 14:20:38.003770 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 14:20:38.003775 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 14:20:38.003781 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 14:20:38.003786 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 14:20:38.003791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 14:20:38.003797 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 14:20:38.003802 kernel: TAA: Mitigation: TSX disabled Jan 30 14:20:38.003808 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 14:20:38.003813 kernel: SRBDS: Mitigation: Microcode Jan 30 14:20:38.003818 kernel: GDS: Mitigation: Microcode Jan 30 14:20:38.003824 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 14:20:38.003829 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 14:20:38.003834 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 14:20:38.003839 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 14:20:38.003844 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 14:20:38.003850 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 14:20:38.003856 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 14:20:38.003861 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 14:20:38.003866 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Jan 30 14:20:38.003872 kernel: Freeing SMP alternatives memory: 32K Jan 30 14:20:38.003877 kernel: pid_max: default: 32768 minimum: 301 Jan 30 14:20:38.003882 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 14:20:38.003888 kernel: landlock: Up and running. Jan 30 14:20:38.003893 kernel: SELinux: Initializing. Jan 30 14:20:38.003898 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:20:38.003904 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:20:38.003909 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 14:20:38.003914 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 14:20:38.003921 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 14:20:38.003926 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 14:20:38.003931 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 14:20:38.003937 kernel: ... version: 4 Jan 30 14:20:38.003942 kernel: ... bit width: 48 Jan 30 14:20:38.003947 kernel: ... generic registers: 4 Jan 30 14:20:38.003952 kernel: ... value mask: 0000ffffffffffff Jan 30 14:20:38.003958 kernel: ... max period: 00007fffffffffff Jan 30 14:20:38.003964 kernel: ... fixed-purpose events: 3 Jan 30 14:20:38.003969 kernel: ... event mask: 000000070000000f Jan 30 14:20:38.003974 kernel: signal: max sigframe size: 2032 Jan 30 14:20:38.003980 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 14:20:38.003985 kernel: rcu: Hierarchical SRCU implementation. Jan 30 14:20:38.003991 kernel: rcu: Max phase no-delay instances is 400. Jan 30 14:20:38.003996 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 14:20:38.004001 kernel: smp: Bringing up secondary CPUs ... Jan 30 14:20:38.004006 kernel: smpboot: x86: Booting SMP configuration: Jan 30 14:20:38.004013 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 14:20:38.004018 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 14:20:38.004024 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 14:20:38.004029 kernel: smpboot: Max logical packages: 1 Jan 30 14:20:38.004034 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 14:20:38.004039 kernel: devtmpfs: initialized Jan 30 14:20:38.004045 kernel: x86/mm: Memory block size: 128MB Jan 30 14:20:38.004050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b11000-0x81b11fff] (4096 bytes) Jan 30 14:20:38.004055 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 30 14:20:38.004062 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 14:20:38.004067 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 14:20:38.004072 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 14:20:38.004078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 14:20:38.004083 kernel: audit: initializing netlink subsys (disabled) Jan 30 14:20:38.004088 kernel: audit: type=2000 audit(1738246832.039:1): state=initialized audit_enabled=0 res=1 Jan 30 14:20:38.004093 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 14:20:38.004099 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 14:20:38.004104 kernel: cpuidle: using governor menu Jan 30 14:20:38.004110 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 14:20:38.004115 kernel: dca service started, version 1.12.1 Jan 30 14:20:38.004121 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 14:20:38.004126 kernel: PCI: Using configuration type 1 for base access Jan 30 14:20:38.004131 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 14:20:38.004137 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 14:20:38.004142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 14:20:38.004147 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 14:20:38.004152 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 14:20:38.004159 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 14:20:38.004164 kernel: ACPI: Added _OSI(Module Device) Jan 30 14:20:38.004169 kernel: ACPI: Added _OSI(Processor Device) Jan 30 14:20:38.004174 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 14:20:38.004180 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 14:20:38.004185 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 14:20:38.004190 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:20:38.004196 kernel: ACPI: SSDT 0xFFFF96B801607400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 14:20:38.004201 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:20:38.004207 kernel: ACPI: SSDT 0xFFFF96B8015FF000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 14:20:38.004212 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:20:38.004218 kernel: ACPI: SSDT 0xFFFF96B8015E5600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 14:20:38.004223 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:20:38.004228 kernel: ACPI: SSDT 0xFFFF96B8015FC000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 14:20:38.004233 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:20:38.004239 kernel: ACPI: SSDT 0xFFFF96B80160A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 14:20:38.004244 kernel: ACPI: Dynamic OEM Table Load: Jan 30 14:20:38.004249 kernel: ACPI: SSDT 0xFFFF96B801606400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 14:20:38.004255 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 14:20:38.004261 kernel: ACPI: Interpreter enabled Jan 30 14:20:38.004266 kernel: ACPI: PM: (supports S0 S5) Jan 30 14:20:38.004271 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 14:20:38.004276 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 14:20:38.004282 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 14:20:38.004287 kernel: HEST: Table parsing has been initialized. Jan 30 14:20:38.004292 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jan 30 14:20:38.004297 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 14:20:38.004306 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 14:20:38.004311 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 14:20:38.004336 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 14:20:38.004341 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 14:20:38.004361 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 14:20:38.004366 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 14:20:38.004371 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 14:20:38.004377 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 14:20:38.004382 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 14:20:38.004387 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 14:20:38.004393 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 14:20:38.004399 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 14:20:38.004404 kernel: ACPI: \PIN_: New power resource Jan 30 14:20:38.004409 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 14:20:38.004483 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 14:20:38.004536 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 14:20:38.004583 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 14:20:38.004593 kernel: PCI host bridge to bus 0000:00 Jan 30 14:20:38.004642 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 14:20:38.004686 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 14:20:38.004727 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 14:20:38.004769 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 30 14:20:38.004810 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 14:20:38.004851 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 14:20:38.004912 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 14:20:38.004970 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 14:20:38.005019 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.005072 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 14:20:38.005119 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 30 14:20:38.005171 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 14:20:38.005222 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 30 14:20:38.005275 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 14:20:38.005326 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 30 14:20:38.005374 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 30 14:20:38.005425 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 14:20:38.005472 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 30 14:20:38.005522 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 30 14:20:38.005573 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 14:20:38.005621 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 14:20:38.005673 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 14:20:38.005721 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 14:20:38.005771 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 14:20:38.005821 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 30 14:20:38.005870 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 14:20:38.005928 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 14:20:38.005979 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 30 14:20:38.006025 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 14:20:38.006078 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 14:20:38.006125 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 30 14:20:38.006175 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 14:20:38.006225 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 14:20:38.006275 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 30 14:20:38.006414 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 30 14:20:38.006465 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 30 14:20:38.006523 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 30 14:20:38.006571 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 30 14:20:38.006623 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 30 14:20:38.006669 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 30 14:20:38.006721 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 14:20:38.006770 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.006825 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 14:20:38.006875 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.006927 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 14:20:38.006976 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.007027 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 14:20:38.007076 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.007130 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 30 14:20:38.007179 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.007230 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 14:20:38.007279 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 14:20:38.007340 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 14:20:38.007392 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 14:20:38.007444 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 30 14:20:38.007491 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 14:20:38.007543 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 14:20:38.007591 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 14:20:38.007647 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 14:20:38.007697 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 14:20:38.007749 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 30 14:20:38.007798 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 30 14:20:38.007847 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 14:20:38.007897 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Jan 30 14:20:38.007950 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 14:20:38.007999 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 14:20:38.008048 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 30 14:20:38.008100 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 30 14:20:38.008149 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 14:20:38.008198 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 14:20:38.008247 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 14:20:38.008297 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 14:20:38.008351 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 14:20:38.008400 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 14:20:38.008454 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 30 14:20:38.008507 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 30 14:20:38.008556 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 30 14:20:38.008604 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 14:20:38.008654 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 30 14:20:38.008702 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.008751 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 14:20:38.008800 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 14:20:38.008851 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 14:20:38.008907 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 14:20:38.008956 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 14:20:38.009006 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 30 14:20:38.009055 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 14:20:38.009104 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 30 14:20:38.009153 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 14:20:38.009205 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 14:20:38.009252 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 14:20:38.009304 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 14:20:38.009354 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 14:20:38.009407 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 14:20:38.009457 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 30 14:20:38.009505 kernel: pci 0000:06:00.0: supports D1 D2 Jan 30 14:20:38.009555 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 14:20:38.009606 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 14:20:38.009655 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 14:20:38.009702 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:20:38.009756 kernel: pci_bus 0000:07: extended config space not accessible Jan 30 14:20:38.009813 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 14:20:38.009865 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 30 14:20:38.009916 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 30 14:20:38.009969 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 14:20:38.010021 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 14:20:38.010072 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 14:20:38.010123 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 14:20:38.010173 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 14:20:38.010223 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 14:20:38.010273 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:20:38.010281 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 14:20:38.010289 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 14:20:38.010294 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 14:20:38.010304 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 14:20:38.010329 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 14:20:38.010334 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 14:20:38.010340 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 14:20:38.010359 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 14:20:38.010365 kernel: iommu: Default domain type: Translated Jan 30 14:20:38.010371 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 14:20:38.010378 kernel: PCI: Using ACPI for IRQ routing Jan 30 14:20:38.010383 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 14:20:38.010389 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 14:20:38.010395 kernel: e820: reserve RAM buffer [mem 0x81b11000-0x83ffffff] Jan 30 14:20:38.010400 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 30 14:20:38.010406 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 30 14:20:38.010411 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 30 14:20:38.010416 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 30 14:20:38.010469 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 30 14:20:38.010521 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 30 14:20:38.010573 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 14:20:38.010582 kernel: vgaarb: loaded Jan 30 14:20:38.010588 kernel: clocksource: Switched to clocksource tsc-early Jan 30 14:20:38.010593 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 14:20:38.010599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 14:20:38.010605 kernel: pnp: PnP ACPI init Jan 30 14:20:38.010653 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 14:20:38.010706 kernel: pnp 00:02: [dma 0 disabled] Jan 30 14:20:38.010754 kernel: pnp 00:03: [dma 0 disabled] Jan 30 14:20:38.010801 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 14:20:38.010845 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 14:20:38.010893 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 14:20:38.010939 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 14:20:38.010986 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 14:20:38.011030 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 14:20:38.011074 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 14:20:38.011120 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 14:20:38.011164 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 14:20:38.011207 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 14:20:38.011252 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 14:20:38.011326 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 14:20:38.011386 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 14:20:38.011430 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 14:20:38.011473 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 14:20:38.011517 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 14:20:38.011559 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 14:20:38.011603 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 14:20:38.011652 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 14:20:38.011661 kernel: pnp: PnP ACPI: found 10 devices Jan 30 14:20:38.011667 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 14:20:38.011672 kernel: NET: Registered PF_INET protocol family Jan 30 14:20:38.011678 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:20:38.011684 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 14:20:38.011689 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 14:20:38.011695 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:20:38.011702 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 14:20:38.011708 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 14:20:38.011714 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 14:20:38.011719 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 14:20:38.011725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 14:20:38.011731 kernel: NET: Registered PF_XDP protocol family Jan 30 14:20:38.011779 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 30 14:20:38.011828 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 30 14:20:38.011879 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 30 14:20:38.011929 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 14:20:38.011980 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 14:20:38.012030 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 14:20:38.012078 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 14:20:38.012127 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 14:20:38.012174 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 14:20:38.012222 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 14:20:38.012271 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 14:20:38.012344 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 14:20:38.012409 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 14:20:38.012457 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 14:20:38.012505 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 14:20:38.012557 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 
14:20:38.012604 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 14:20:38.012652 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 14:20:38.012700 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 14:20:38.012751 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 14:20:38.012800 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:20:38.012847 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 14:20:38.012896 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 14:20:38.012943 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 14:20:38.012990 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 14:20:38.013032 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 14:20:38.013075 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 14:20:38.013116 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 14:20:38.013159 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 30 14:20:38.013200 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 14:20:38.013251 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 30 14:20:38.013297 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 14:20:38.013383 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 30 14:20:38.013427 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 30 14:20:38.013476 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 14:20:38.013519 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 30 14:20:38.013568 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 30 14:20:38.013614 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 30 14:20:38.013660 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 14:20:38.013706 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 30 14:20:38.013713 kernel: PCI: CLS 64 bytes, default 64 Jan 30 14:20:38.013719 kernel: DMAR: No ATSR found Jan 30 14:20:38.013725 kernel: DMAR: No SATC found Jan 30 14:20:38.013731 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 14:20:38.013778 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 30 14:20:38.013830 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 30 14:20:38.013877 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 30 14:20:38.013926 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 30 14:20:38.013972 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 30 14:20:38.014020 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 30 14:20:38.014066 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 30 14:20:38.014113 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 30 14:20:38.014160 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 30 14:20:38.014207 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 30 14:20:38.014256 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 30 14:20:38.014307 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 30 14:20:38.014389 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 30 14:20:38.014437 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 30 14:20:38.014485 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 30 14:20:38.014532 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 30 14:20:38.014579 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 30 
14:20:38.014626 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 30 14:20:38.014677 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 30 14:20:38.014724 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 30 14:20:38.014772 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 30 14:20:38.014821 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 30 14:20:38.014870 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 30 14:20:38.014919 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 30 14:20:38.014969 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 14:20:38.015018 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 30 14:20:38.015070 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 30 14:20:38.015078 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 14:20:38.015084 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 14:20:38.015090 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 30 14:20:38.015096 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 30 14:20:38.015102 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 14:20:38.015107 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 14:20:38.015113 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 14:20:38.015166 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 14:20:38.015176 kernel: Initialise system trusted keyrings Jan 30 14:20:38.015182 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 14:20:38.015188 kernel: Key type asymmetric registered Jan 30 14:20:38.015193 kernel: Asymmetric key parser 'x509' registered Jan 30 14:20:38.015199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 14:20:38.015205 kernel: io scheduler mq-deadline registered Jan 30 14:20:38.015210 kernel: io scheduler kyber registered Jan 30 14:20:38.015216 kernel: io scheduler bfq registered Jan 30 14:20:38.015263 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 30 14:20:38.015336 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 30 14:20:38.015399 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 30 14:20:38.015448 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 30 14:20:38.015495 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 30 14:20:38.015543 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 30 14:20:38.015595 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 14:20:38.015606 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 14:20:38.015612 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 14:20:38.015617 kernel: pstore: Using crash dump compression: deflate Jan 30 14:20:38.015623 kernel: pstore: Registered erst as persistent store backend Jan 30 14:20:38.015629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 14:20:38.015635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:20:38.015640 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 14:20:38.015646 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 14:20:38.015652 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 30 14:20:38.015701 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 14:20:38.015709 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 14:20:38.015752 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 14:20:38.015797 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 14:20:38.015841 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T14:20:36 UTC (1738246836) Jan 30 14:20:38.015884 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 14:20:38.015892 kernel: intel_pstate: Intel P-state driver initializing Jan 30 14:20:38.015898 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 14:20:38.015906 kernel: intel_pstate: HWP enabled Jan 30 14:20:38.015911 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 30 14:20:38.015917 kernel: vesafb: scrolling: redraw Jan 30 14:20:38.015923 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 30 14:20:38.015929 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000075929ad5, using 768k, total 768k Jan 30 14:20:38.015934 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:20:38.015940 kernel: fb0: VESA VGA frame buffer device Jan 30 14:20:38.015945 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:20:38.015951 kernel: Segment Routing with IPv6 Jan 30 14:20:38.015958 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:20:38.015964 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:20:38.015969 kernel: Key type dns_resolver registered Jan 30 14:20:38.015975 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 14:20:38.015981 kernel: IPI shorthand broadcast: enabled Jan 30 14:20:38.015986 kernel: sched_clock: Marking stable (2475001102, 1384805382)->(4404140979, -544334495) Jan 30 14:20:38.015992 kernel: registered taskstats version 1 Jan 30 14:20:38.015997 kernel: Loading compiled-in X.509 certificates Jan 30 14:20:38.016003 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 14:20:38.016010 kernel: Key type .fscrypt registered Jan 30 14:20:38.016015 kernel: Key type fscrypt-provisioning registered Jan 30 14:20:38.016021 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:20:38.016027 kernel: ima: No architecture policies found Jan 30 14:20:38.016032 kernel: clk: Disabling unused clocks Jan 30 14:20:38.016038 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 14:20:38.016044 kernel: Write protecting the kernel read-only data: 36864k Jan 30 14:20:38.016049 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 14:20:38.016055 kernel: Run /init as init process Jan 30 14:20:38.016061 kernel: with arguments: Jan 30 14:20:38.016067 kernel: /init Jan 30 14:20:38.016073 kernel: with environment: Jan 30 14:20:38.016078 kernel: HOME=/ Jan 30 14:20:38.016084 kernel: TERM=linux Jan 30 14:20:38.016089 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:20:38.016096 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:20:38.016104 systemd[1]: Detected architecture x86-64. Jan 30 14:20:38.016110 systemd[1]: Running in initrd. Jan 30 14:20:38.016116 systemd[1]: No hostname configured, using default hostname. Jan 30 14:20:38.016122 systemd[1]: Hostname set to . Jan 30 14:20:38.016127 systemd[1]: Initializing machine ID from random generator. 
Jan 30 14:20:38.016133 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:20:38.016139 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:20:38.016145 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:20:38.016152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:20:38.016158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:20:38.016164 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:20:38.016170 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:20:38.016177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:20:38.016183 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:20:38.016189 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 30 14:20:38.016196 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 30 14:20:38.016201 kernel: clocksource: Switched to clocksource tsc Jan 30 14:20:38.016207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:20:38.016213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:20:38.016219 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:20:38.016225 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:20:38.016231 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:20:38.016237 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:20:38.016243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:20:38.016250 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:20:38.016256 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:20:38.016261 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:20:38.016267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:20:38.016273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:20:38.016279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:20:38.016285 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:20:38.016291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:20:38.016298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:20:38.016307 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:20:38.016313 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:20:38.016340 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:20:38.016371 systemd-journald[269]: Collecting audit messages is disabled. Jan 30 14:20:38.016386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:20:38.016393 systemd-journald[269]: Journal started Jan 30 14:20:38.016406 systemd-journald[269]: Runtime Journal (/run/log/journal/6d43a6d87896443cbc8fa9a1913a8bb2) is 8.0M, max 639.9M, 631.9M free. 
Jan 30 14:20:38.039432 systemd-modules-load[272]: Inserted module 'overlay' Jan 30 14:20:38.061303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:20:38.089813 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:20:38.154560 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:20:38.154600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:20:38.154618 kernel: Bridge firewalling registered Jan 30 14:20:38.132771 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:20:38.151282 systemd-modules-load[272]: Inserted module 'br_netfilter' Jan 30 14:20:38.165703 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:20:38.184678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:20:38.192662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:38.223660 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:20:38.227284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:20:38.244057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:20:38.244508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:20:38.247720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:20:38.249021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:20:38.249735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:20:38.250863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:20:38.252022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:20:38.255715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:20:38.259570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:38.260362 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:20:38.271427 systemd-resolved[300]: Positive Trust Anchors: Jan 30 14:20:38.271432 systemd-resolved[300]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:20:38.271454 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:20:38.392662 dracut-cmdline[308]: dracut-dracut-053 Jan 30 14:20:38.392662 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:20:38.273011 systemd-resolved[300]: Defaulting to hostname 'linux'. Jan 30 14:20:38.290552 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:20:38.290676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:20:38.551351 kernel: SCSI subsystem initialized Jan 30 14:20:38.573332 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:20:38.596321 kernel: iscsi: registered transport (tcp) Jan 30 14:20:38.628284 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:20:38.628305 kernel: QLogic iSCSI HBA Driver Jan 30 14:20:38.661593 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:20:38.684588 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:20:38.739038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:20:38.739057 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:20:38.758691 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:20:38.816339 kernel: raid6: avx2x4 gen() 53356 MB/s Jan 30 14:20:38.848377 kernel: raid6: avx2x2 gen() 53894 MB/s Jan 30 14:20:38.884751 kernel: raid6: avx2x1 gen() 45251 MB/s Jan 30 14:20:38.884769 kernel: raid6: using algorithm avx2x2 gen() 53894 MB/s Jan 30 14:20:38.931801 kernel: raid6: .... xor() 30511 MB/s, rmw enabled Jan 30 14:20:38.931820 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:20:38.972328 kernel: xor: automatically using best checksumming function avx Jan 30 14:20:39.089335 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:20:39.095022 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:20:39.122614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:20:39.129191 systemd-udevd[497]: Using default interface naming scheme 'v255'. Jan 30 14:20:39.133417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:20:39.169525 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 30 14:20:39.205161 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 30 14:20:39.221991 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:20:39.247658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:20:39.332109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:20:39.364660 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 14:20:39.364704 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 14:20:39.375352 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:20:39.395306 kernel: libata version 3.00 loaded. Jan 30 14:20:39.408307 kernel: ACPI: bus type USB registered Jan 30 14:20:39.408345 kernel: PTP clock support registered Jan 30 14:20:39.408360 kernel: usbcore: registered new interface driver usbfs Jan 30 14:20:39.404811 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:20:39.485536 kernel: usbcore: registered new interface driver hub Jan 30 14:20:39.485552 kernel: usbcore: registered new device driver usb Jan 30 14:20:39.485560 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:20:39.470167 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:20:39.501279 kernel: AES CTR mode by8 optimization enabled Jan 30 14:20:39.501297 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 14:20:39.836622 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 14:20:39.836718 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 30 14:20:39.836784 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 14:20:39.836846 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 14:20:39.836907 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 14:20:39.836966 kernel: scsi host0: ahci Jan 30 14:20:39.837038 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 14:20:39.837100 kernel: scsi host1: ahci Jan 30 14:20:39.837159 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 14:20:39.837219 kernel: scsi host2: ahci Jan 30 14:20:39.837279 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 14:20:39.837348 kernel: scsi host3: ahci Jan 30 14:20:39.837410 kernel: hub 1-0:1.0: USB hub found Jan 30 14:20:39.837479 kernel: scsi host4: ahci Jan 30 14:20:39.837537 kernel: hub 1-0:1.0: 16 ports detected Jan 30 14:20:39.837595 kernel: scsi host5: ahci Jan 30 14:20:39.837658 kernel: hub 2-0:1.0: USB hub found Jan 30 14:20:39.837728 kernel: scsi host6: ahci Jan 30 14:20:39.837793 kernel: hub 2-0:1.0: 10 ports detected Jan 30 14:20:39.837859 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 30 14:20:39.837868 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 14:20:39.837876 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 30 14:20:39.837883 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 30 14:20:39.837890 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 30 14:20:39.837898 kernel: pps pps0: new PPS source ptp0 Jan 30 14:20:39.837962 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 30 14:20:39.837970 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 30 14:20:40.030711 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 30 14:20:40.030721 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 14:20:40.030793 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 14:20:40.077765 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 30 14:20:40.077776 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 30 14:20:40.077783 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d4:36 Jan 30 14:20:40.077858 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 30 14:20:40.077923 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 14:20:40.077988 kernel: hub 1-14:1.0: USB hub found Jan 30 14:20:40.078063 kernel: hub 1-14:1.0: 4 ports detected Jan 30 14:20:40.078126 kernel: pps pps1: new PPS source ptp1 Jan 30 14:20:39.501133 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:20:40.184923 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 30 14:20:40.185123 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 14:20:40.185305 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d4:37 Jan 30 14:20:40.185477 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 30 14:20:40.185635 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.185653 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 14:20:40.185809 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:39.623368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:20:40.220801 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.220815 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.022418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:20:40.313236 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 14:20:40.313252 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 14:20:40.313262 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 14:20:40.313274 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 14:20:40.313286 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 14:20:40.051420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 14:20:40.378512 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 14:20:40.378526 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Jan 30 14:20:41.193467 kernel: ata1.00: Features: NCQ-prio Jan 30 14:20:41.193478 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 14:20:41.193553 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 14:20:41.193562 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 14:20:41.193672 kernel: ata2.00: Features: NCQ-prio Jan 30 14:20:41.193681 kernel: ata1.00: configured for UDMA/133 Jan 30 14:20:41.193688 kernel: ata2.00: configured for UDMA/133 Jan 30 14:20:41.193695 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 14:20:41.193767 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 14:20:41.193832 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:20:41.193841 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 30 14:20:41.193908 kernel: usbcore: registered new interface driver usbhid Jan 30 14:20:41.193916 kernel: usbhid: USB HID core driver Jan 30 14:20:41.193924 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 14:20:41.193931 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 30 14:20:41.193996 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.194006 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 14:20:41.194073 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:20:41.194081 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 14:20:41.194141 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 14:20:41.194200 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:20:41.194259 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 30 14:20:41.194333 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 14:20:41.194395 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:20:41.194455 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 14:20:41.194514 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 14:20:41.194522 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 30 14:20:41.194578 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 30 14:20:41.194641 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 14:20:41.194712 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 30 14:20:41.194771 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 14:20:41.194781 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 30 14:20:41.194838 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 14:20:41.194905 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 14:20:41.194962 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:20:41.195019 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 14:20:41.195079 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.195088 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Jan 30 14:20:41.195097 kernel: GPT:9289727 != 937703087 Jan 30 14:20:41.195104 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:20:41.195111 kernel: GPT:9289727 != 937703087 Jan 30 14:20:41.195118 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:20:41.195125 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.195132 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 30 14:20:41.195189 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (546) Jan 30 14:20:41.195197 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (552) Jan 30 14:20:41.195205 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 14:20:41.195268 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Jan 30 14:20:41.806007 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 14:20:41.806363 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.806402 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.806443 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.806477 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.806511 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:41.806551 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:41.806579 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 30 14:20:41.806911 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 30 14:20:41.807243 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 14:20:40.051456 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:40.200204 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:20:41.847432 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 30 14:20:41.847596 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 30 14:20:40.442439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:20:40.489419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:20:40.489454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:40.512398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:20:41.893394 disk-uuid[708]: Primary Header is updated. Jan 30 14:20:41.893394 disk-uuid[708]: Secondary Entries is updated. Jan 30 14:20:41.893394 disk-uuid[708]: Secondary Header is updated. Jan 30 14:20:40.536835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:20:40.575515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:20:40.970630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:41.090462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:20:41.103528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:41.137574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. 
Jan 30 14:20:41.181977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 30 14:20:41.216421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 14:20:41.245373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 14:20:41.259961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 14:20:41.292433 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:20:42.389698 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 14:20:42.409343 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 30 14:20:42.409360 disk-uuid[709]: The operation has completed successfully. Jan 30 14:20:42.445076 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:20:42.445124 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:20:42.500596 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:20:42.537375 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 14:20:42.537431 sh[738]: Success Jan 30 14:20:42.572481 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:20:42.594310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:20:42.602646 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:20:42.653268 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 14:20:42.653288 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:42.674668 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:20:42.693664 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:20:42.711637 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:20:42.749338 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:20:42.751873 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:20:42.760598 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:20:42.769540 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:20:42.876570 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:42.876589 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:42.876597 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:20:42.876604 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:20:42.876611 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:20:42.810337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:20:42.913558 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:42.913615 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:20:42.924133 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:20:42.962160 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 30 14:20:42.970622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:20:43.000686 ignition[826]: Ignition 2.19.0 Jan 30 14:20:43.000692 ignition[826]: Stage: fetch-offline Jan 30 14:20:43.002852 unknown[826]: fetched base config from "system" Jan 30 14:20:43.000716 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:43.002856 unknown[826]: fetched user config from "system" Jan 30 14:20:43.000724 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:43.022665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:20:43.000795 ignition[826]: parsed url from cmdline: "" Jan 30 14:20:43.036333 systemd-networkd[921]: lo: Link UP Jan 30 14:20:43.000799 ignition[826]: no config URL provided Jan 30 14:20:43.036336 systemd-networkd[921]: lo: Gained carrier Jan 30 14:20:43.000802 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:20:43.039420 systemd-networkd[921]: Enumeration completed Jan 30 14:20:43.000825 ignition[826]: parsing config with SHA512: 74ca5d76fb651e948060909f1be80c0f5d3c4a53701845a2a7c0a287c56e09413bcda56eed69f4bded0cae42e5348479b42d2dd22ccc75755d71666fe457e2d8 Jan 30 14:20:43.039511 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:20:43.003067 ignition[826]: fetch-offline: fetch-offline passed Jan 30 14:20:43.040448 systemd-networkd[921]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:20:43.003070 ignition[826]: POST message to Packet Timeline Jan 30 14:20:43.054518 systemd[1]: Reached target network.target - Network. Jan 30 14:20:43.003072 ignition[826]: POST Status error: resource requires networking Jan 30 14:20:43.068180 systemd-networkd[921]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:20:43.003111 ignition[826]: Ignition finished successfully Jan 30 14:20:43.071429 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 14:20:43.110198 ignition[934]: Ignition 2.19.0 Jan 30 14:20:43.089614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:20:43.110212 ignition[934]: Stage: kargs Jan 30 14:20:43.283422 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 14:20:43.097071 systemd-networkd[921]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:20:43.110567 ignition[934]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:43.279471 systemd-networkd[921]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:20:43.110589 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:43.112292 ignition[934]: kargs: kargs passed Jan 30 14:20:43.112322 ignition[934]: POST message to Packet Timeline Jan 30 14:20:43.112351 ignition[934]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:43.113495 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48797->[::1]:53: read: connection refused Jan 30 14:20:43.313837 ignition[934]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 14:20:43.314694 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33827->[::1]:53: read: connection refused Jan 30 14:20:43.507440 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 14:20:43.507941 systemd-networkd[921]: eno1: Link UP Jan 30 14:20:43.508109 systemd-networkd[921]: eno2: Link UP Jan 30 14:20:43.508229 systemd-networkd[921]: enp1s0f0np0: Link UP Jan 30 14:20:43.508389 systemd-networkd[921]: enp1s0f0np0: Gained carrier Jan 30 14:20:43.517471 systemd-networkd[921]: enp1s0f1np1: Link UP Jan 30 14:20:43.539393 systemd-networkd[921]: enp1s0f0np0: DHCPv4 address 139.178.70.237/31, gateway 139.178.70.236 acquired from 145.40.83.140 Jan 30 14:20:43.715112 ignition[934]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 14:20:43.716017 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58646->[::1]:53: read: connection refused Jan 30 14:20:44.307056 systemd-networkd[921]: enp1s0f1np1: Gained carrier Jan 30 14:20:44.516644 ignition[934]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 14:20:44.517824 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48756->[::1]:53: read: connection refused Jan 30 14:20:44.562914 systemd-networkd[921]: enp1s0f0np0: Gained IPv6LL Jan 30 14:20:45.650919 systemd-networkd[921]: enp1s0f1np1: Gained IPv6LL Jan 30 14:20:46.119570 ignition[934]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 14:20:46.120735 ignition[934]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53399->[::1]:53: read: connection refused Jan 30 14:20:49.324331 ignition[934]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 14:20:50.450888 ignition[934]: GET result: OK Jan 30 14:20:50.835045 ignition[934]: Ignition finished successfully Jan 30 14:20:50.839941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:20:50.864638 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:20:50.870817 ignition[953]: Ignition 2.19.0 Jan 30 14:20:50.870821 ignition[953]: Stage: disks Jan 30 14:20:50.870923 ignition[953]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:50.870929 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:50.871426 ignition[953]: disks: disks passed Jan 30 14:20:50.871429 ignition[953]: POST message to Packet Timeline Jan 30 14:20:50.871437 ignition[953]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:51.111685 ignition[953]: GET result: OK Jan 30 14:20:51.507483 ignition[953]: Ignition finished successfully Jan 30 14:20:51.510893 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 30 14:20:51.526535 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:20:51.544578 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:20:51.565542 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:20:51.586691 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:20:51.606696 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:20:51.635574 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:20:51.670499 systemd-fsck[971]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 14:20:51.682036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:20:51.707545 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:20:51.800368 kernel: EXT4-fs (sdb9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 14:20:51.800883 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:20:51.810717 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:20:51.851775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:20:51.861281 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:20:51.975700 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (980) Jan 30 14:20:51.975715 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:51.975723 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:51.975730 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:20:51.975737 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:20:51.975744 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:20:51.900943 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:20:51.976075 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 14:20:52.006418 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:20:52.043603 coreos-metadata[982]: Jan 30 14:20:52.037 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:20:52.006440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:20:52.083471 coreos-metadata[998]: Jan 30 14:20:52.037 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:20:52.026332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:20:52.052679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:20:52.086550 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:20:52.131428 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:20:52.141419 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:20:52.152356 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:20:52.163347 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:20:52.169579 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:20:52.180610 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 30 14:20:52.216335 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:20:52.234500 kernel: BTRFS info (device sdb6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:52.227134 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:20:52.242513 ignition[1100]: INFO : Ignition 2.19.0 Jan 30 14:20:52.242513 ignition[1100]: INFO : Stage: mount Jan 30 14:20:52.242513 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:52.242513 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:52.242513 ignition[1100]: INFO : mount: mount passed Jan 30 14:20:52.242513 ignition[1100]: INFO : POST message to Packet Timeline Jan 30 14:20:52.242513 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:52.312434 coreos-metadata[998]: Jan 30 14:20:52.293 INFO Fetch successful Jan 30 14:20:52.252695 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:20:52.354544 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 14:20:52.354610 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 14:20:52.468110 coreos-metadata[982]: Jan 30 14:20:52.468 INFO Fetch successful Jan 30 14:20:52.541307 coreos-metadata[982]: Jan 30 14:20:52.541 INFO wrote hostname ci-4081.3.0-a-b3fea05ed8 to /sysroot/etc/hostname Jan 30 14:20:52.542796 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:20:52.885101 ignition[1100]: INFO : GET result: OK Jan 30 14:20:53.185165 ignition[1100]: INFO : Ignition finished successfully Jan 30 14:20:53.187894 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:20:53.219530 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:20:53.229451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:20:53.298394 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1123) Jan 30 14:20:53.298421 kernel: BTRFS info (device sdb6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:20:53.317717 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:20:53.334896 kernel: BTRFS info (device sdb6): using free space tree Jan 30 14:20:53.371589 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 30 14:20:53.371612 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 30 14:20:53.383974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:20:53.415998 ignition[1140]: INFO : Ignition 2.19.0 Jan 30 14:20:53.415998 ignition[1140]: INFO : Stage: files Jan 30 14:20:53.429568 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:53.429568 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:53.429568 ignition[1140]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:20:53.429568 ignition[1140]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 14:20:53.429568 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 14:20:53.419990 unknown[1140]: wrote ssh authorized keys file for user: core Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:53.561465 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:53.812641 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 14:20:54.061688 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:20:54.226996 ignition[1140]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 14:20:54.226996 ignition[1140]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:20:54.257522 ignition[1140]: INFO : files: files passed Jan 30 14:20:54.257522 ignition[1140]: INFO : POST message to Packet Timeline Jan 30 14:20:54.257522 ignition[1140]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:54.872036 ignition[1140]: INFO : GET result: OK Jan 30 14:20:55.714967 ignition[1140]: INFO : Ignition finished successfully Jan 30 14:20:55.718254 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:20:55.745588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:20:55.755892 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:20:55.765692 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:20:55.765748 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:20:55.832817 initrd-setup-root-after-ignition[1179]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:20:55.832817 initrd-setup-root-after-ignition[1179]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:20:55.847708 initrd-setup-root-after-ignition[1183]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:20:55.837359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:20:55.872639 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:20:55.916791 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 30 14:20:55.960417 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:20:55.960467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:20:55.979708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:20:56.000497 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:20:56.021600 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:20:56.032588 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:20:56.106858 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:20:56.135748 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:20:56.164598 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:20:56.176919 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:20:56.197989 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:20:56.215918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:20:56.216348 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:20:56.245030 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:20:56.266920 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:20:56.284925 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:20:56.303919 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:20:56.324913 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:20:56.345912 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:20:56.365919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:20:56.386952 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:20:56.408931 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:20:56.428911 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:20:56.446798 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:20:56.447201 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:20:56.482778 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:20:56.492934 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:20:56.514788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:20:56.515249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:20:56.537782 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:20:56.538181 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:20:56.569867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:20:56.570339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:20:56.590120 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:20:56.608781 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:20:56.609241 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 14:20:56.630003 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:20:56.648022 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:20:56.665945 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:20:56.666251 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:20:56.686043 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:20:56.686381 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:20:56.709072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:20:56.709508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:20:56.728098 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:20:56.827587 ignition[1203]: INFO : Ignition 2.19.0 Jan 30 14:20:56.827587 ignition[1203]: INFO : Stage: umount Jan 30 14:20:56.827587 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:20:56.827587 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 14:20:56.827587 ignition[1203]: INFO : umount: umount passed Jan 30 14:20:56.827587 ignition[1203]: INFO : POST message to Packet Timeline Jan 30 14:20:56.827587 ignition[1203]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 14:20:56.728508 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:20:56.745981 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:20:56.746394 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:20:56.776435 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:20:56.799026 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:20:56.816446 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:20:56.816589 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:20:56.846956 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:20:56.847352 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:20:56.897738 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:20:56.902353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:20:56.902605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:20:56.953774 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:20:56.953850 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:20:58.316019 ignition[1203]: INFO : GET result: OK Jan 30 14:20:58.655487 ignition[1203]: INFO : Ignition finished successfully Jan 30 14:20:58.658848 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:20:58.659103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:20:58.675178 systemd[1]: Stopped target network.target - Network. Jan 30 14:20:58.691512 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:20:58.691766 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:20:58.709714 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:20:58.709853 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:20:58.727799 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:20:58.727957 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 30 14:20:58.745770 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:20:58.745938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:20:58.763800 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:20:58.763968 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:20:58.782121 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:20:58.791457 systemd-networkd[921]: enp1s0f1np1: DHCPv6 lease lost Jan 30 14:20:58.799555 systemd-networkd[921]: enp1s0f0np0: DHCPv6 lease lost Jan 30 14:20:58.799788 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:20:58.818407 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:20:58.818682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:20:58.837635 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:20:58.837988 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:20:58.858064 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:20:58.858188 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:20:58.887552 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:20:58.896500 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:20:58.896661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:20:58.906878 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:20:58.907042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:20:58.936636 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:20:58.936786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:20:58.955695 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:20:58.955859 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:20:58.974944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:20:58.997607 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:20:58.997976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:20:59.031486 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:20:59.031642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:20:59.037795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:20:59.037898 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:20:59.066575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:20:59.066716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:20:59.096010 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:20:59.096184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:20:59.136491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:20:59.136668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:20:59.448507 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). 
Jan 30 14:20:59.178442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:20:59.207494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:20:59.207653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:20:59.231610 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 14:20:59.231757 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:20:59.254570 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:20:59.254714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:20:59.274570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:20:59.274727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:20:59.297797 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:20:59.298121 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:20:59.319255 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:20:59.319522 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:20:59.340448 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:20:59.376561 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:20:59.390411 systemd[1]: Switching root. Jan 30 14:20:59.572489 systemd-journald[269]: Journal stopped Jan 30 14:21:02.205758 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:21:02.205773 kernel: SELinux: policy capability open_perms=1 Jan 30 14:21:02.205780 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:21:02.205786 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:21:02.205791 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:21:02.205797 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:21:02.205803 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:21:02.205808 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:21:02.205813 kernel: audit: type=1403 audit(1738246859.769:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:21:02.205820 systemd[1]: Successfully loaded SELinux policy in 155.484ms. Jan 30 14:21:02.205827 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.202ms. Jan 30 14:21:02.205834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:21:02.205840 systemd[1]: Detected architecture x86-64. Jan 30 14:21:02.205846 systemd[1]: Detected first boot. Jan 30 14:21:02.205852 systemd[1]: Hostname set to <ci-4081.3.0-a-b3fea05ed8>. Jan 30 14:21:02.205860 systemd[1]: Initializing machine ID from random generator. Jan 30 14:21:02.205866 zram_generator::config[1257]: No configuration found. Jan 30 14:21:02.205872 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:21:02.205878 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:21:02.205884 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:21:02.205890 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 14:21:02.205897 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:21:02.205904 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:21:02.205910 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:21:02.205917 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:21:02.205923 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:21:02.205929 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:21:02.205935 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:21:02.205941 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:21:02.205949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:21:02.205955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:21:02.205961 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:21:02.205967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:21:02.205974 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:21:02.205980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:21:02.205986 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 30 14:21:02.205992 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:21:02.205999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:21:02.206005 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:21:02.206012 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:21:02.206020 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:21:02.206026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:21:02.206032 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:21:02.206039 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:21:02.206046 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:21:02.206053 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:21:02.206059 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:21:02.206065 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:21:02.206072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:21:02.206078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:21:02.206086 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:21:02.206093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:21:02.206099 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:21:02.206105 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 30 14:21:02.206112 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:02.206118 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:21:02.206125 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:21:02.206133 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:21:02.206139 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:21:02.206147 systemd[1]: Reached target machines.target - Containers. Jan 30 14:21:02.206153 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:21:02.206160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:02.206166 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:21:02.206173 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:21:02.206180 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:02.206186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:21:02.206194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:02.206200 kernel: ACPI: bus type drm_connector registered Jan 30 14:21:02.206206 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:21:02.206212 kernel: fuse: init (API version 7.39) Jan 30 14:21:02.206219 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:02.206225 kernel: loop: module loaded Jan 30 14:21:02.206231 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:21:02.206238 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:21:02.206245 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:21:02.206252 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:21:02.206258 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:21:02.206265 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:21:02.206279 systemd-journald[1360]: Collecting audit messages is disabled. Jan 30 14:21:02.206294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:21:02.206304 systemd-journald[1360]: Journal started Jan 30 14:21:02.206337 systemd-journald[1360]: Runtime Journal (/run/log/journal/8c5bb34cf8c14c8e9512c5c3c00aa1a8) is 8.0M, max 639.9M, 631.9M free. Jan 30 14:21:00.280626 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:21:00.299530 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Jan 30 14:21:00.299827 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:21:02.259446 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:21:02.293382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:21:02.327361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 30 14:21:02.361024 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:21:02.361053 systemd[1]: Stopped verity-setup.service. Jan 30 14:21:02.423348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:02.444496 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:21:02.453871 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:21:02.463559 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:21:02.473540 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:21:02.483562 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:21:02.493571 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:21:02.503568 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:21:02.513627 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:21:02.524626 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:21:02.536668 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:21:02.536738 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:21:02.548656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:02.548726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:21:02.560628 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:21:02.560699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:21:02.570628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:21:02.570698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:21:02.582638 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:21:02.582707 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:21:02.592638 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:02.592719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:02.602663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:21:02.612681 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:21:02.623719 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:21:02.634821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:21:02.669619 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:21:02.705696 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:21:02.716213 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:21:02.726477 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:21:02.726502 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:21:02.737386 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:21:02.763840 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
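Each modprobe@<name>.service start/finish pair above is an instantiation of a single template unit, with %i carrying the module name. Flatcar ships systemd's stock template, which looks roughly like this (abridged sketch):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # leading "-": a module that fails to load does not fail the unit
    ExecStart=-/sbin/modprobe -abq %i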
Jan 30 14:21:02.777319 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:21:02.787837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:02.791271 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:21:02.804154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:21:02.816431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:21:02.817197 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:21:02.822578 systemd-journald[1360]: Time spent on flushing to /var/log/journal/8c5bb34cf8c14c8e9512c5c3c00aa1a8 is 12.934ms for 1371 entries. Jan 30 14:21:02.822578 systemd-journald[1360]: System Journal (/var/log/journal/8c5bb34cf8c14c8e9512c5c3c00aa1a8) is 8.0M, max 195.6M, 187.6M free. Jan 30 14:21:02.867662 systemd-journald[1360]: Received client request to flush runtime journal. Jan 30 14:21:02.848425 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:21:02.858800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:21:02.869062 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:21:02.882189 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:21:02.899040 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:21:02.907318 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 14:21:02.907960 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:21:02.923757 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jan 30 14:21:02.923767 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jan 30 14:21:02.933728 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:21:02.947307 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:21:02.958754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:21:02.969547 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:21:02.980513 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:21:02.997048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:21:03.010307 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 14:21:03.020629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:21:03.034311 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:21:03.053532 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:21:03.071166 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:21:03.077309 kernel: loop2: detected capacity change from 0 to 218376 Jan 30 14:21:03.086860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:21:03.087453 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
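The repeated "ACLs are not supported, ignoring" lines come from systemd-tmpfiles encountering an a/a+ (POSIX ACL) entry on a filesystem mounted without ACL support; the entry is skipped rather than failing the service. systemd's own tmpfiles fragments contain lines of exactly this shape, for example:

    # give group "adm" default read access to the journal directory via an ACL
    a+ /var/log/journal - - - - d:group:adm:r-x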
Jan 30 14:21:03.098856 udevadm[1396]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 14:21:03.103746 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:21:03.124699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:21:03.132204 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 30 14:21:03.132214 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 30 14:21:03.141623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:21:03.147307 kernel: loop3: detected capacity change from 0 to 8 Jan 30 14:21:03.173376 ldconfig[1386]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:21:03.174582 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:21:03.196362 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 14:21:03.257354 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 14:21:03.257939 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:21:03.288366 kernel: loop6: detected capacity change from 0 to 218376 Jan 30 14:21:03.294449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:21:03.306229 systemd-udevd[1421]: Using default interface naming scheme 'v255'. Jan 30 14:21:03.318169 (sd-merge)[1418]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 30 14:21:03.318362 kernel: loop7: detected capacity change from 0 to 8 Jan 30 14:21:03.318433 (sd-merge)[1418]: Merged extensions into '/usr'. Jan 30 14:21:03.320593 systemd[1]: Reloading requested from client PID 1392 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:21:03.320599 systemd[1]: Reloading... Jan 30 14:21:03.355373 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1425) Jan 30 14:21:03.355441 zram_generator::config[1507]: No configuration found. Jan 30 14:21:03.404318 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:21:03.404403 kernel: IPMI message handler: version 39.2 Jan 30 14:21:03.438312 kernel: ipmi device interface Jan 30 14:21:03.459313 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 30 14:21:03.459379 kernel: ACPI: button: Sleep Button [SLPB] Jan 30 14:21:03.478667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:21:03.495308 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 14:21:03.515344 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:21:03.530933 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 30 14:21:03.531215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. 
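The (sd-merge) messages are systemd-sysext assembling an overlay over /usr from the four extension images; the loopN "detected capacity change" kernel lines interleaved above are those images being loop-attached. A sketch of the layout sysext expects (paths and names illustrative, not read from this host):

    /var/lib/extensions/kubernetes.raw            # or /run/extensions, /etc/extensions
      usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar                                # must match /etc/os-release, or ID=_any
        SYSEXT_LEVEL=1.0

    $ systemd-sysext list    # enumerate available extension images
    $ systemd-sysext merge   # overlay them onto /usr and /opt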
Jan 30 14:21:03.543899 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 30 14:21:03.559063 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 30 14:21:03.676128 kernel: ipmi_si: IPMI System Interface driver Jan 30 14:21:03.676143 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 30 14:21:03.676227 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 30 14:21:03.676242 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 30 14:21:03.676252 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 30 14:21:03.676341 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 30 14:21:03.676411 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 30 14:21:03.676477 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 30 14:21:03.676489 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 30 14:21:03.676497 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 30 14:21:03.676574 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 30 14:21:03.676674 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 30 14:21:03.676740 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 30 14:21:03.676814 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jan 30 14:21:03.548309 systemd[1]: Reloading finished in 227 ms. Jan 30 14:21:03.869306 kernel: iTCO_vendor_support: vendor-support=0 Jan 30 14:21:03.905764 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jan 30 14:21:03.917121 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 30 14:21:03.917307 kernel: intel_rapl_common: Found RAPL domain package Jan 30 14:21:03.938892 kernel: intel_rapl_common: Found RAPL domain core Jan 30 14:21:03.954842 kernel: intel_rapl_common: Found RAPL domain dram Jan 30 14:21:03.975472 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:21:03.992331 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 30 14:21:04.007556 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:21:04.010305 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 30 14:21:04.025226 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:21:04.056510 systemd[1]: Starting ensure-sysext.service... Jan 30 14:21:04.063947 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:21:04.074901 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:21:04.081884 lvm[1598]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:21:04.086268 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:21:04.095964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:21:04.096563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:21:04.097994 systemd[1]: Reloading requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:21:04.098002 systemd[1]: Reloading... 
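With the kcs interface registered, the BMC identified above (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) becomes reachable through /dev/ipmi0. If ipmitool happens to be available, a sanity check looks like this (output abridged and illustrative):

    $ ipmitool mc info
    Device ID                 : 32            # 0x20, matching the probe line above
    Manufacturer ID           : 10876         # 0x2a7c, Supermicro's IANA enterprise number
    Product ID                : 6927 (0x1b0f)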
Jan 30 14:21:04.137350 zram_generator::config[1631]: No configuration found. Jan 30 14:21:04.137391 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:21:04.137597 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:21:04.138094 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:21:04.138263 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jan 30 14:21:04.138304 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jan 30 14:21:04.139830 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:21:04.139834 systemd-tmpfiles[1602]: Skipping /boot Jan 30 14:21:04.143928 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:21:04.143932 systemd-tmpfiles[1602]: Skipping /boot Jan 30 14:21:04.193406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:21:04.247061 systemd[1]: Reloading finished in 148 ms. Jan 30 14:21:04.269626 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:21:04.280558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:21:04.291517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:21:04.302533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:21:04.316656 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:21:04.334458 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:21:04.345291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:21:04.351289 augenrules[1711]: No rules Jan 30 14:21:04.357068 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:21:04.369112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:21:04.371212 lvm[1716]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:21:04.381652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:21:04.392119 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:21:04.404315 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:21:04.415019 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:21:04.424697 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:21:04.435704 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:21:04.445934 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:21:04.456592 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
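The "Duplicate line for path" warnings mean two tmpfiles.d fragments declare the same path; the first line parsed wins and the later duplicate is ignored, so they are cosmetic here. When an override is actually intended, the mechanism is file-name shadowing rather than duplicate lines, e.g. (hypothetical):

    # a file of the same name under /etc takes precedence over the vendor copy,
    # masking /usr/lib/tmpfiles.d/provision.conf entirely
    /etc/tmpfiles.d/provision.conf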
Jan 30 14:21:04.461469 systemd-networkd[1600]: lo: Link UP Jan 30 14:21:04.461472 systemd-networkd[1600]: lo: Gained carrier Jan 30 14:21:04.463893 systemd-networkd[1600]: bond0: netdev ready Jan 30 14:21:04.464790 systemd-networkd[1600]: Enumeration completed Jan 30 14:21:04.467707 systemd-networkd[1600]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:a8.network. Jan 30 14:21:04.468516 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:21:04.483263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:04.483436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:04.493983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:04.504222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:04.516038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:04.524252 systemd-resolved[1718]: Positive Trust Anchors: Jan 30 14:21:04.524259 systemd-resolved[1718]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:21:04.524283 systemd-resolved[1718]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:21:04.525427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:04.526285 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:21:04.526943 systemd-resolved[1718]: Using system hostname 'ci-4081.3.0-a-b3fea05ed8'. Jan 30 14:21:04.538093 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:21:04.547338 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:21:04.547438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:04.548570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:04.548647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:21:04.559713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:21:04.559787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:21:04.570710 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:04.570807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:04.580680 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:21:04.591583 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
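networkd matches each physical NIC to a MAC-keyed unit file, as the 10-0c:42:a1:97:fc:a8.network name indicates. On a bonded Packet machine such a file is typically just a match stanza plus a bond assignment; a minimal sketch (contents assumed, not read from this host):

    # /etc/systemd/network/10-0c:42:a1:97:fc:a8.network (sketch)
    [Match]
    MACAddress=0c:42:a1:97:fc:a8

    [Network]
    Bond=bond0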
Jan 30 14:21:04.605297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:04.605501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:04.620754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:04.636728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:04.649340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:04.659655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:04.659965 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:21:04.660193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:04.662631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:04.662948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:21:04.678183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:21:04.678568 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:21:04.698378 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 14:21:04.719699 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:04.720075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:04.733335 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 30 14:21:04.733568 systemd-networkd[1600]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:a9.network. Jan 30 14:21:04.747826 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:04.748115 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:21:04.757637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:21:04.767967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:21:04.777995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:21:04.789012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:21:04.798453 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:21:04.798535 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:21:04.798587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:21:04.799209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:21:04.799298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 14:21:04.810738 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:21:04.810815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:21:04.821740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:21:04.821832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:21:04.832724 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:21:04.832832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:21:04.849181 systemd[1]: Finished ensure-sysext.service. Jan 30 14:21:04.858963 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:21:04.859016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:21:04.872562 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:21:04.901352 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 14:21:04.923175 systemd-networkd[1600]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 30 14:21:04.923318 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 30 14:21:04.924698 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:21:04.925109 systemd-networkd[1600]: enp1s0f0np0: Link UP Jan 30 14:21:04.925480 systemd-networkd[1600]: enp1s0f0np0: Gained carrier Jan 30 14:21:04.943473 systemd[1]: Reached target network.target - Network. Jan 30 14:21:04.945734 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 30 14:21:04.953407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:21:04.964526 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:21:04.968268 systemd-networkd[1600]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:fc:a8.network. Jan 30 14:21:04.968508 systemd-networkd[1600]: enp1s0f1np1: Link UP Jan 30 14:21:04.968746 systemd-networkd[1600]: enp1s0f1np1: Gained carrier Jan 30 14:21:04.975405 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:21:04.981524 systemd-networkd[1600]: bond0: Link UP Jan 30 14:21:04.981784 systemd-networkd[1600]: bond0: Gained carrier Jan 30 14:21:04.981986 systemd-timesyncd[1760]: Network configuration changed, trying to establish connection. Jan 30 14:21:04.985451 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:21:04.996468 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:21:05.007397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:21:05.018365 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:21:05.018382 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:21:05.026376 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:21:05.044452 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
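bond0 itself is declared by the 05-bond0 unit pair referenced above, and the kernel's "No 802.3ad response from the link partner" warning shows the bond negotiating LACP (802.3ad) with the switch. The .netdev half would read roughly like this (a sketch with typical values, not this host's exact file):

    # /etc/systemd/network/05-bond0.netdev (sketch)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad
    LACPTransmitRate=fast
    MIIMonitorSec=0.1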
Jan 30 14:21:05.047334 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 30 14:21:05.047354 kernel: bond0: active interface up! Jan 30 14:21:05.070429 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:21:05.081377 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:21:05.089578 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:21:05.100098 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:21:05.109933 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:21:05.119701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:21:05.129438 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:21:05.139378 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:21:05.147399 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:21:05.147414 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:21:05.155395 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:21:05.174097 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:21:05.177376 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 30 14:21:05.186927 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:21:05.195979 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:21:05.199497 coreos-metadata[1766]: Jan 30 14:21:05.199 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:21:05.205912 dbus-daemon[1767]: [system] SELinux support is enabled Jan 30 14:21:05.221750 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:21:05.223649 jq[1770]: false Jan 30 14:21:05.231360 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:21:05.232149 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
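coreos-metadata begins polling the Packet metadata service at this point; the same endpoint the agent fetches can be queried by hand from the host. A one-liner sketch (assuming jq is available, which on Flatcar may mean running it in a container):

    $ curl -s https://metadata.packet.net/metadata | jq -r '.hostname'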
Jan 30 14:21:05.241585 extend-filesystems[1772]: Found loop4 Jan 30 14:21:05.241585 extend-filesystems[1772]: Found loop5 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found loop6 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found loop7 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sda Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb1 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb2 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb3 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found usr Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb4 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb6 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb7 Jan 30 14:21:05.248508 extend-filesystems[1772]: Found sdb9 Jan 30 14:21:05.248508 extend-filesystems[1772]: Checking size of /dev/sdb9 Jan 30 14:21:05.248508 extend-filesystems[1772]: Resized partition /dev/sdb9 Jan 30 14:21:05.412384 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jan 30 14:21:05.412404 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1501) Jan 30 14:21:05.242110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:21:05.412553 extend-filesystems[1786]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:21:05.249283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:21:05.300130 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:21:05.319822 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:21:05.361424 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 30 14:21:05.384851 systemd-logind[1792]: Watching system buttons on /dev/input/event3 (Power Button) Jan 30 14:21:05.434942 update_engine[1797]: I20250130 14:21:05.411893 1797 main.cc:92] Flatcar Update Engine starting Jan 30 14:21:05.434942 update_engine[1797]: I20250130 14:21:05.412595 1797 update_check_scheduler.cc:74] Next update check in 4m41s Jan 30 14:21:05.384862 systemd-logind[1792]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 14:21:05.384872 systemd-logind[1792]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 30 14:21:05.385102 systemd-logind[1792]: New seat seat0. Jan 30 14:21:05.389705 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:21:05.403464 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:21:05.427018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:21:05.445584 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:21:05.456597 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:21:05.457979 jq[1798]: true Jan 30 14:21:05.473571 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:21:05.473681 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:21:05.473869 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:21:05.473974 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:21:05.483816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
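The resize numbers above check out: at the 4 KiB block size, 553472 blocks × 4096 B ≈ 2.27 GB is the as-shipped root filesystem, and 116605649 blocks × 4096 B ≈ 477.6 GB fills the Micron 5200 480 GB disk found during device enumeration. extend-filesystems performs the growth online; the core of it is a single call against the mounted filesystem, roughly:

    # grow the mounted ext4 filesystem to fill its (already enlarged) partition
    $ resize2fs /dev/sdb9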
Jan 30 14:21:05.483920 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:21:05.497337 (ntainerd)[1802]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:21:05.498645 jq[1801]: true Jan 30 14:21:05.500503 dbus-daemon[1767]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 14:21:05.501525 tar[1800]: linux-amd64/LICENSE Jan 30 14:21:05.501634 tar[1800]: linux-amd64/helm Jan 30 14:21:05.507257 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 30 14:21:05.507416 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 30 14:21:05.511089 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:21:05.520531 sshd_keygen[1795]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:21:05.521053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:21:05.521191 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:21:05.533437 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:21:05.533524 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:21:05.551286 bash[1830]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:21:05.558458 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:21:05.570169 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:21:05.580563 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:21:05.586590 locksmithd[1838]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:21:05.602496 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:21:05.611645 systemd[1]: Starting sshkeys.service... Jan 30 14:21:05.618784 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:21:05.618917 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:21:05.630835 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:21:05.642072 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:21:05.654189 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:21:05.665728 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:21:05.675783 containerd[1802]: time="2025-01-30T14:21:05.675730490Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:21:05.677014 coreos-metadata[1859]: Jan 30 14:21:05.676 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 14:21:05.682460 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:21:05.689145 containerd[1802]: time="2025-01-30T14:21:05.689126160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:21:05.689898 containerd[1802]: time="2025-01-30T14:21:05.689859186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:05.689898 containerd[1802]: time="2025-01-30T14:21:05.689874668Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:21:05.689898 containerd[1802]: time="2025-01-30T14:21:05.689882911Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:21:05.689971 containerd[1802]: time="2025-01-30T14:21:05.689963791Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:21:05.689990 containerd[1802]: time="2025-01-30T14:21:05.689973956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690020 containerd[1802]: time="2025-01-30T14:21:05.690011679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690038 containerd[1802]: time="2025-01-30T14:21:05.690021214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690120 containerd[1802]: time="2025-01-30T14:21:05.690110826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690135 containerd[1802]: time="2025-01-30T14:21:05.690120354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690135 containerd[1802]: time="2025-01-30T14:21:05.690127857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690135 containerd[1802]: time="2025-01-30T14:21:05.690133240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690183 containerd[1802]: time="2025-01-30T14:21:05.690174451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690296 containerd[1802]: time="2025-01-30T14:21:05.690288956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690375 containerd[1802]: time="2025-01-30T14:21:05.690351495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:21:05.690375 containerd[1802]: time="2025-01-30T14:21:05.690360367Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 14:21:05.690411 containerd[1802]: time="2025-01-30T14:21:05.690403788Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:21:05.690439 containerd[1802]: time="2025-01-30T14:21:05.690432566Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:21:05.692102 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 30 14:21:05.701551 containerd[1802]: time="2025-01-30T14:21:05.701509583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:21:05.701551 containerd[1802]: time="2025-01-30T14:21:05.701537125Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:21:05.701551 containerd[1802]: time="2025-01-30T14:21:05.701548450Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:21:05.701616 containerd[1802]: time="2025-01-30T14:21:05.701557672Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:21:05.701616 containerd[1802]: time="2025-01-30T14:21:05.701565652Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:21:05.701643 containerd[1802]: time="2025-01-30T14:21:05.701635279Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:21:05.701791 containerd[1802]: time="2025-01-30T14:21:05.701757663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:21:05.701819 containerd[1802]: time="2025-01-30T14:21:05.701810858Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:21:05.701841 containerd[1802]: time="2025-01-30T14:21:05.701822450Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:21:05.701841 containerd[1802]: time="2025-01-30T14:21:05.701829598Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:21:05.701841 containerd[1802]: time="2025-01-30T14:21:05.701837196Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701884 containerd[1802]: time="2025-01-30T14:21:05.701844809Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701884 containerd[1802]: time="2025-01-30T14:21:05.701851698Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701884 containerd[1802]: time="2025-01-30T14:21:05.701862566Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701884 containerd[1802]: time="2025-01-30T14:21:05.701870751Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701884 containerd[1802]: time="2025-01-30T14:21:05.701877901Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701884566Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701891725Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701902470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701911204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701918402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701925734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701932886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701942421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.701952 containerd[1802]: time="2025-01-30T14:21:05.701949373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.701956498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.701964348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.701972258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.701978491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.701985786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.701993380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702001702Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702013115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702020544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702026559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702050703Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702060594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:21:05.702075 containerd[1802]: time="2025-01-30T14:21:05.702066726Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:21:05.702251 containerd[1802]: time="2025-01-30T14:21:05.702073139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:21:05.702251 containerd[1802]: time="2025-01-30T14:21:05.702078652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702251 containerd[1802]: time="2025-01-30T14:21:05.702085261Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:21:05.702251 containerd[1802]: time="2025-01-30T14:21:05.702090993Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:21:05.702251 containerd[1802]: time="2025-01-30T14:21:05.702096549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:21:05.702328 containerd[1802]: time="2025-01-30T14:21:05.702246048Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:21:05.702328 containerd[1802]: time="2025-01-30T14:21:05.702278988Z" level=info msg="Connect containerd service" Jan 30 14:21:05.702328 containerd[1802]: time="2025-01-30T14:21:05.702306266Z" level=info msg="using legacy CRI server" Jan 30 14:21:05.702328 containerd[1802]: time="2025-01-30T14:21:05.702311587Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:21:05.702453 containerd[1802]: time="2025-01-30T14:21:05.702362151Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:21:05.702530 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:21:05.702685 containerd[1802]: time="2025-01-30T14:21:05.702645756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:21:05.702779 containerd[1802]: time="2025-01-30T14:21:05.702738217Z" level=info msg="Start subscribing containerd event" Jan 30 14:21:05.702779 containerd[1802]: time="2025-01-30T14:21:05.702766151Z" level=info msg="Start recovering state" Jan 30 14:21:05.702828 containerd[1802]: time="2025-01-30T14:21:05.702801698Z" level=info msg="Start event monitor" Jan 30 14:21:05.702828 containerd[1802]: time="2025-01-30T14:21:05.702804580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:21:05.702828 containerd[1802]: time="2025-01-30T14:21:05.702811335Z" level=info msg="Start snapshots syncer" Jan 30 14:21:05.702828 containerd[1802]: time="2025-01-30T14:21:05.702821614Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:21:05.702828 containerd[1802]: time="2025-01-30T14:21:05.702826121Z" level=info msg="Start streaming server" Jan 30 14:21:05.702895 containerd[1802]: time="2025-01-30T14:21:05.702839328Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:21:05.702895 containerd[1802]: time="2025-01-30T14:21:05.702876291Z" level=info msg="containerd successfully booted in 0.027611s" Jan 30 14:21:05.711678 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:21:05.803314 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jan 30 14:21:05.827760 tar[1800]: linux-amd64/README.md Jan 30 14:21:05.828381 extend-filesystems[1786]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jan 30 14:21:05.828381 extend-filesystems[1786]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 30 14:21:05.828381 extend-filesystems[1786]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jan 30 14:21:05.859392 extend-filesystems[1772]: Resized filesystem in /dev/sdb9 Jan 30 14:21:05.842685 systemd[1]: extend-filesystems.service: Deactivated successfully. 
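The CRI configuration dump a few entries above is containerd's merged runtime config; the detail that matters on a systemd-managed host is SystemdCgroup:true in the runc runtime options, which places containers under the systemd cgroup hierarchy instead of raw cgroupfs. The corresponding fragment of /etc/containerd/config.toml reads roughly as follows (a sketch of the standard CRI plugin keys, not this host's exact file):

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true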
Jan 30 14:21:05.842779 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:21:05.878716 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:21:06.578460 systemd-networkd[1600]: bond0: Gained IPv6LL Jan 30 14:21:06.963594 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:21:06.975059 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:21:06.995605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:07.008812 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:21:07.027849 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:21:07.665495 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 30 14:21:07.665635 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 30 14:21:07.742835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:07.753962 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:08.199184 kubelet[1901]: E0130 14:21:08.199092 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:08.200086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:08.200163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:08.553154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:21:08.575664 systemd[1]: Started sshd@0-139.178.70.237:22-147.75.109.163:59794.service - OpenSSH per-connection server daemon (147.75.109.163:59794). Jan 30 14:21:08.613072 sshd[1919]: Accepted publickey for core from 147.75.109.163 port 59794 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:08.614531 sshd[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:08.620343 systemd-logind[1792]: New session 1 of user core. Jan 30 14:21:08.621217 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:21:08.645768 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:21:08.658228 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:21:08.672065 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:21:08.684333 (systemd)[1923]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:21:08.710162 coreos-metadata[1859]: Jan 30 14:21:08.710 INFO Fetch successful Jan 30 14:21:08.747858 unknown[1859]: wrote ssh authorized keys file for user: core Jan 30 14:21:08.767540 update-ssh-keys[1929]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:21:08.767798 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:21:08.772331 systemd[1923]: Queued start job for default target default.target. Jan 30 14:21:08.772868 systemd[1923]: Created slice app.slice - User Application Slice. Jan 30 14:21:08.772881 systemd[1923]: Reached target paths.target - Paths. 
Jan 30 14:21:08.772890 systemd[1923]: Reached target timers.target - Timers. Jan 30 14:21:08.773548 systemd[1923]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:21:08.779117 systemd[1]: Finished sshkeys.service. Jan 30 14:21:08.779196 systemd[1923]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:21:08.779224 systemd[1923]: Reached target sockets.target - Sockets. Jan 30 14:21:08.779233 systemd[1923]: Reached target basic.target - Basic System. Jan 30 14:21:08.779255 systemd[1923]: Reached target default.target - Main User Target. Jan 30 14:21:08.779270 systemd[1923]: Startup finished in 90ms. Jan 30 14:21:08.786712 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:21:08.801582 coreos-metadata[1766]: Jan 30 14:21:08.801 INFO Fetch successful Jan 30 14:21:08.809523 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:21:08.844511 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:21:08.868623 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 30 14:21:08.881925 systemd[1]: Started sshd@1-139.178.70.237:22-147.75.109.163:47076.service - OpenSSH per-connection server daemon (147.75.109.163:47076). Jan 30 14:21:08.926136 sshd[1945]: Accepted publickey for core from 147.75.109.163 port 47076 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:08.926790 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:08.929138 systemd-logind[1792]: New session 2 of user core. Jan 30 14:21:08.953557 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:21:09.017888 sshd[1945]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:09.043281 systemd[1]: sshd@1-139.178.70.237:22-147.75.109.163:47076.service: Deactivated successfully. Jan 30 14:21:09.044184 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:21:09.044851 systemd-logind[1792]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:21:09.045496 systemd[1]: Started sshd@2-139.178.70.237:22-147.75.109.163:47078.service - OpenSSH per-connection server daemon (147.75.109.163:47078). Jan 30 14:21:09.057026 systemd-logind[1792]: Removed session 2. Jan 30 14:21:09.095879 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 47078 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:09.096906 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:09.100739 systemd-logind[1792]: New session 3 of user core. Jan 30 14:21:09.116832 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:21:09.197617 sshd[1952]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:09.203920 systemd[1]: sshd@2-139.178.70.237:22-147.75.109.163:47078.service: Deactivated successfully. Jan 30 14:21:09.207770 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:21:09.209465 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 30 14:21:09.224475 systemd-logind[1792]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:21:09.226423 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:21:09.237997 systemd[1]: Startup finished in 2.659s (kernel) + 22.773s (initrd) + 9.622s (userspace) = 35.055s. Jan 30 14:21:09.239325 systemd-logind[1792]: Removed session 3. 
Jan 30 14:21:09.257478 login[1876]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:21:09.260995 systemd-logind[1792]: New session 4 of user core. Jan 30 14:21:09.279562 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:21:09.287086 login[1872]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:21:09.289833 systemd-logind[1792]: New session 5 of user core. Jan 30 14:21:09.290496 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:21:10.247211 systemd-timesyncd[1760]: Contacted time server 66.118.231.14:123 (0.flatcar.pool.ntp.org). Jan 30 14:21:10.247426 systemd-timesyncd[1760]: Initial clock synchronization to Thu 2025-01-30 14:21:10.318750 UTC. Jan 30 14:21:18.205799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:21:18.220560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:18.481179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:18.483393 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:18.514977 kubelet[1991]: E0130 14:21:18.514919 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:18.518244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:18.518380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:21:19.271629 systemd[1]: Started sshd@3-139.178.70.237:22-147.75.109.163:54864.service - OpenSSH per-connection server daemon (147.75.109.163:54864). Jan 30 14:21:19.303430 sshd[2011]: Accepted publickey for core from 147.75.109.163 port 54864 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:19.304085 sshd[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:19.306633 systemd-logind[1792]: New session 6 of user core. Jan 30 14:21:19.321584 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:21:19.373586 sshd[2011]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:19.389223 systemd[1]: sshd@3-139.178.70.237:22-147.75.109.163:54864.service: Deactivated successfully. Jan 30 14:21:19.390155 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:21:19.390845 systemd-logind[1792]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:21:19.391475 systemd[1]: Started sshd@4-139.178.70.237:22-147.75.109.163:54866.service - OpenSSH per-connection server daemon (147.75.109.163:54866). Jan 30 14:21:19.392008 systemd-logind[1792]: Removed session 6. Jan 30 14:21:19.424839 sshd[2018]: Accepted publickey for core from 147.75.109.163 port 54866 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:19.425580 sshd[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:19.428570 systemd-logind[1792]: New session 7 of user core. Jan 30 14:21:19.446547 systemd[1]: Started session-7.scope - Session 7 of User core. 
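Each kubelet start above dies with the same run.go:72 error: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts (restart counter 1 here, 2 later in this log). On a kubeadm-style node that file is only written at kubeadm init/join time, so the crash loop is expected until then. A sketch of the minimal shape of such a file, written from Go; the three fields shown are real KubeletConfiguration fields chosen to match values visible later in this log, not the actual file generated on this node:

package main

import "os"

// Minimal KubeletConfiguration; cgroupDriver and staticPodPath match the
// NodeConfig dump and "Adding static pod path" entries later in this log.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`

func main() {
	// The exact path the run.go:72 error above is failing to open.
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o600); err != nil {
		panic(err)
	}
}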
Jan 30 14:21:19.496620 sshd[2018]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:19.511027 systemd[1]: sshd@4-139.178.70.237:22-147.75.109.163:54866.service: Deactivated successfully. Jan 30 14:21:19.514608 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:21:19.517902 systemd-logind[1792]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:21:19.532064 systemd[1]: Started sshd@5-139.178.70.237:22-147.75.109.163:54868.service - OpenSSH per-connection server daemon (147.75.109.163:54868). Jan 30 14:21:19.534875 systemd-logind[1792]: Removed session 7. Jan 30 14:21:19.580544 sshd[2026]: Accepted publickey for core from 147.75.109.163 port 54868 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:19.581149 sshd[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:19.583619 systemd-logind[1792]: New session 8 of user core. Jan 30 14:21:19.599605 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:21:19.650688 sshd[2026]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:19.666257 systemd[1]: sshd@5-139.178.70.237:22-147.75.109.163:54868.service: Deactivated successfully. Jan 30 14:21:19.669493 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:21:19.672619 systemd-logind[1792]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:21:19.687094 systemd[1]: Started sshd@6-139.178.70.237:22-147.75.109.163:54882.service - OpenSSH per-connection server daemon (147.75.109.163:54882). Jan 30 14:21:19.690071 systemd-logind[1792]: Removed session 8. Jan 30 14:21:19.805740 sshd[2034]: Accepted publickey for core from 147.75.109.163 port 54882 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:19.807232 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:19.812335 systemd-logind[1792]: New session 9 of user core. Jan 30 14:21:19.827678 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:21:19.895827 sudo[2037]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:21:19.895973 sudo[2037]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:21:19.918288 sudo[2037]: pam_unix(sudo:session): session closed for user root Jan 30 14:21:19.919589 sshd[2034]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:19.935783 systemd[1]: sshd@6-139.178.70.237:22-147.75.109.163:54882.service: Deactivated successfully. Jan 30 14:21:19.937050 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:21:19.938225 systemd-logind[1792]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:21:19.939581 systemd[1]: Started sshd@7-139.178.70.237:22-147.75.109.163:54888.service - OpenSSH per-connection server daemon (147.75.109.163:54888). Jan 30 14:21:19.940518 systemd-logind[1792]: Removed session 9. Jan 30 14:21:19.982932 sshd[2042]: Accepted publickey for core from 147.75.109.163 port 54888 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:19.983585 sshd[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:19.985999 systemd-logind[1792]: New session 10 of user core. Jan 30 14:21:19.996601 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 14:21:20.046319 sudo[2046]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:21:20.046604 sudo[2046]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:21:20.049864 sudo[2046]: pam_unix(sudo:session): session closed for user root Jan 30 14:21:20.055111 sudo[2045]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:21:20.055431 sudo[2045]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:21:20.079607 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:21:20.080644 auditctl[2049]: No rules Jan 30 14:21:20.080853 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:21:20.080983 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:21:20.082374 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:21:20.103183 augenrules[2067]: No rules Jan 30 14:21:20.103852 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:21:20.104769 sudo[2045]: pam_unix(sudo:session): session closed for user root Jan 30 14:21:20.106264 sshd[2042]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:20.132432 systemd[1]: sshd@7-139.178.70.237:22-147.75.109.163:54888.service: Deactivated successfully. Jan 30 14:21:20.133110 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:21:20.133736 systemd-logind[1792]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:21:20.134393 systemd[1]: Started sshd@8-139.178.70.237:22-147.75.109.163:54896.service - OpenSSH per-connection server daemon (147.75.109.163:54896). Jan 30 14:21:20.134883 systemd-logind[1792]: Removed session 10. Jan 30 14:21:20.167145 sshd[2075]: Accepted publickey for core from 147.75.109.163 port 54896 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:21:20.167851 sshd[2075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:21:20.170392 systemd-logind[1792]: New session 11 of user core. Jan 30 14:21:20.189567 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:21:20.251924 sudo[2078]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:21:20.252848 sudo[2078]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:21:20.613646 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:21:20.613701 (dockerd)[2104]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:21:20.863919 dockerd[2104]: time="2025-01-30T14:21:20.863828624Z" level=info msg="Starting up" Jan 30 14:21:20.931020 dockerd[2104]: time="2025-01-30T14:21:20.930994336Z" level=info msg="Loading containers: start." Jan 30 14:21:21.024348 kernel: Initializing XFRM netlink socket Jan 30 14:21:21.075353 systemd-networkd[1600]: docker0: Link UP Jan 30 14:21:21.091287 dockerd[2104]: time="2025-01-30T14:21:21.091239872Z" level=info msg="Loading containers: done." 
Jan 30 14:21:21.100850 dockerd[2104]: time="2025-01-30T14:21:21.100801842Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:21:21.100926 dockerd[2104]: time="2025-01-30T14:21:21.100854323Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:21:21.100926 dockerd[2104]: time="2025-01-30T14:21:21.100906231Z" level=info msg="Daemon has completed initialization" Jan 30 14:21:21.100912 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck328635570-merged.mount: Deactivated successfully. Jan 30 14:21:21.116289 dockerd[2104]: time="2025-01-30T14:21:21.116192188Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:21:21.116317 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:21:21.784052 containerd[1802]: time="2025-01-30T14:21:21.784000061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 14:21:22.415178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286443639.mount: Deactivated successfully. Jan 30 14:21:23.188112 containerd[1802]: time="2025-01-30T14:21:23.188085424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:23.188327 containerd[1802]: time="2025-01-30T14:21:23.188275595Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 14:21:23.188695 containerd[1802]: time="2025-01-30T14:21:23.188655555Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:23.190230 containerd[1802]: time="2025-01-30T14:21:23.190185992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:23.190857 containerd[1802]: time="2025-01-30T14:21:23.190815376Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.406793864s" Jan 30 14:21:23.190857 containerd[1802]: time="2025-01-30T14:21:23.190834313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 14:21:23.191163 containerd[1802]: time="2025-01-30T14:21:23.191150499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 14:21:24.245075 containerd[1802]: time="2025-01-30T14:21:24.245051262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:24.245377 containerd[1802]: time="2025-01-30T14:21:24.245250811Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 14:21:24.245740 containerd[1802]: 
time="2025-01-30T14:21:24.245728231Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:24.247240 containerd[1802]: time="2025-01-30T14:21:24.247229106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:24.247843 containerd[1802]: time="2025-01-30T14:21:24.247829629Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.05666067s" Jan 30 14:21:24.247868 containerd[1802]: time="2025-01-30T14:21:24.247846944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 14:21:24.248110 containerd[1802]: time="2025-01-30T14:21:24.248098911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 14:21:25.220403 containerd[1802]: time="2025-01-30T14:21:25.220348878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:25.220632 containerd[1802]: time="2025-01-30T14:21:25.220579512Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 14:21:25.220956 containerd[1802]: time="2025-01-30T14:21:25.220910309Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:25.222489 containerd[1802]: time="2025-01-30T14:21:25.222449437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:25.223084 containerd[1802]: time="2025-01-30T14:21:25.223034825Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 974.920176ms" Jan 30 14:21:25.223084 containerd[1802]: time="2025-01-30T14:21:25.223051384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 14:21:25.223296 containerd[1802]: time="2025-01-30T14:21:25.223284282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 14:21:26.132912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208987961.mount: Deactivated successfully. 
Jan 30 14:21:26.323077 containerd[1802]: time="2025-01-30T14:21:26.323053299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:26.323281 containerd[1802]: time="2025-01-30T14:21:26.323189858Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 14:21:26.323663 containerd[1802]: time="2025-01-30T14:21:26.323625853Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:26.324588 containerd[1802]: time="2025-01-30T14:21:26.324550403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:26.324968 containerd[1802]: time="2025-01-30T14:21:26.324931988Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.101630679s" Jan 30 14:21:26.324968 containerd[1802]: time="2025-01-30T14:21:26.324948078Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 14:21:26.325229 containerd[1802]: time="2025-01-30T14:21:26.325216206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 14:21:26.823983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723800921.mount: Deactivated successfully. 
Jan 30 14:21:27.371488 containerd[1802]: time="2025-01-30T14:21:27.371461551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:27.371693 containerd[1802]: time="2025-01-30T14:21:27.371655296Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 14:21:27.372104 containerd[1802]: time="2025-01-30T14:21:27.372094110Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:27.373956 containerd[1802]: time="2025-01-30T14:21:27.373914323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:27.374528 containerd[1802]: time="2025-01-30T14:21:27.374486857Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.049253583s" Jan 30 14:21:27.374528 containerd[1802]: time="2025-01-30T14:21:27.374502390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 14:21:27.374816 containerd[1802]: time="2025-01-30T14:21:27.374792253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 14:21:27.909090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790359901.mount: Deactivated successfully. 
Jan 30 14:21:27.910407 containerd[1802]: time="2025-01-30T14:21:27.910314496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:27.910603 containerd[1802]: time="2025-01-30T14:21:27.910550749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 14:21:27.911497 containerd[1802]: time="2025-01-30T14:21:27.911446928Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:27.912797 containerd[1802]: time="2025-01-30T14:21:27.912784482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:27.913393 containerd[1802]: time="2025-01-30T14:21:27.913381114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 538.574799ms" Jan 30 14:21:27.913421 containerd[1802]: time="2025-01-30T14:21:27.913396231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 14:21:27.913818 containerd[1802]: time="2025-01-30T14:21:27.913756920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 14:21:28.470670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331387988.mount: Deactivated successfully. Jan 30 14:21:28.704285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:21:28.715619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:28.960211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:28.962440 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:21:28.981891 kubelet[2455]: E0130 14:21:28.981869 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:21:28.983359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:21:28.983461 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
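The PullImage/ImageCreate/Pulled sequences above are containerd's CRI plugin resolving a tag, fetching layers, and recording the repo digest and unpacked size. The same pull can be driven through containerd's 1.x Go client (this host runs containerd v1.7.21 per the kubelet entries below); a sketch using the socket path from this log and the "k8s.io" namespace the CRI plugin stores its images in:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}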
Jan 30 14:21:29.508859 containerd[1802]: time="2025-01-30T14:21:29.508827502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:29.509077 containerd[1802]: time="2025-01-30T14:21:29.508992376Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 14:21:29.509486 containerd[1802]: time="2025-01-30T14:21:29.509475771Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:29.511516 containerd[1802]: time="2025-01-30T14:21:29.511501914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:29.512227 containerd[1802]: time="2025-01-30T14:21:29.512208420Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.598409253s" Jan 30 14:21:29.512248 containerd[1802]: time="2025-01-30T14:21:29.512233450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 14:21:30.728659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:30.752647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:30.767463 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-11.scope)... Jan 30 14:21:30.767470 systemd[1]: Reloading... Jan 30 14:21:30.810358 zram_generator::config[2566]: No configuration found. Jan 30 14:21:30.877715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:21:30.937356 systemd[1]: Reloading finished in 169 ms. Jan 30 14:21:30.988073 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:21:30.988127 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:21:30.988271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:31.001670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:31.282009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:31.284408 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:21:31.307213 kubelet[2629]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:21:31.307213 kubelet[2629]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 30 14:21:31.307213 kubelet[2629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:21:31.307436 kubelet[2629]: I0130 14:21:31.307215 2629 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:21:31.510218 kubelet[2629]: I0130 14:21:31.510176 2629 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 14:21:31.510218 kubelet[2629]: I0130 14:21:31.510187 2629 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:21:31.510378 kubelet[2629]: I0130 14:21:31.510310 2629 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 14:21:31.527390 kubelet[2629]: E0130 14:21:31.527312 2629 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:31.528016 kubelet[2629]: I0130 14:21:31.527978 2629 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:21:31.536249 kubelet[2629]: E0130 14:21:31.536186 2629 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:21:31.536249 kubelet[2629]: I0130 14:21:31.536205 2629 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:21:31.545699 kubelet[2629]: I0130 14:21:31.545666 2629 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:21:31.546827 kubelet[2629]: I0130 14:21:31.546816 2629 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:21:31.546946 kubelet[2629]: I0130 14:21:31.546829 2629 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-b3fea05ed8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:21:31.546946 kubelet[2629]: I0130 14:21:31.546920 2629 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:21:31.546946 kubelet[2629]: I0130 14:21:31.546925 2629 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 14:21:31.547044 kubelet[2629]: I0130 14:21:31.546990 2629 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:21:31.550184 kubelet[2629]: I0130 14:21:31.550147 2629 kubelet.go:446] "Attempting to sync node with API server" Jan 30 14:21:31.550184 kubelet[2629]: I0130 14:21:31.550155 2629 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:21:31.550184 kubelet[2629]: I0130 14:21:31.550164 2629 kubelet.go:352] "Adding apiserver pod source" Jan 30 14:21:31.550184 kubelet[2629]: I0130 14:21:31.550169 2629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:21:31.552765 kubelet[2629]: I0130 14:21:31.552714 2629 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:21:31.553138 kubelet[2629]: I0130 14:21:31.553095 2629 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:21:31.553908 kubelet[2629]: W0130 14:21:31.553871 2629 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 14:21:31.555960 kubelet[2629]: I0130 14:21:31.555947 2629 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 14:21:31.556002 kubelet[2629]: I0130 14:21:31.555971 2629 server.go:1287] "Started kubelet" Jan 30 14:21:31.556083 kubelet[2629]: I0130 14:21:31.556062 2629 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:21:31.556327 kubelet[2629]: W0130 14:21:31.556288 2629 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b3fea05ed8&limit=500&resourceVersion=0": dial tcp 139.178.70.237:6443: connect: connection refused Jan 30 14:21:31.556371 kubelet[2629]: E0130 14:21:31.556347 2629 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b3fea05ed8&limit=500&resourceVersion=0\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:31.556538 kubelet[2629]: W0130 14:21:31.556505 2629 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.237:6443: connect: connection refused Jan 30 14:21:31.556620 kubelet[2629]: E0130 14:21:31.556607 2629 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:31.557481 kubelet[2629]: I0130 14:21:31.557423 2629 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:21:31.557625 kubelet[2629]: I0130 14:21:31.557618 2629 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:21:31.559431 kubelet[2629]: E0130 14:21:31.559419 2629 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:21:31.559579 kubelet[2629]: I0130 14:21:31.559569 2629 server.go:490] "Adding debug handlers to kubelet server" Jan 30 14:21:31.559879 kubelet[2629]: I0130 14:21:31.559870 2629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:21:31.559951 kubelet[2629]: I0130 14:21:31.559939 2629 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:21:31.559987 kubelet[2629]: E0130 14:21:31.559953 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:31.559987 kubelet[2629]: I0130 14:21:31.559964 2629 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 14:21:31.560036 kubelet[2629]: I0130 14:21:31.560023 2629 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:21:31.560067 kubelet[2629]: I0130 14:21:31.560059 2629 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:21:31.560128 kubelet[2629]: E0130 14:21:31.560109 2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b3fea05ed8?timeout=10s\": dial tcp 139.178.70.237:6443: connect: connection refused" interval="200ms" Jan 30 14:21:31.560166 kubelet[2629]: W0130 14:21:31.560149 2629 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.237:6443: connect: connection refused Jan 30 14:21:31.560186 kubelet[2629]: E0130 14:21:31.560173 2629 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:31.560254 kubelet[2629]: I0130 14:21:31.560248 2629 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:21:31.560294 kubelet[2629]: I0130 14:21:31.560284 2629 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:21:31.560698 kubelet[2629]: I0130 14:21:31.560689 2629 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:21:31.560922 kubelet[2629]: E0130 14:21:31.559961 2629 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.237:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.237:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-b3fea05ed8.181f7e56971eab09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b3fea05ed8,UID:ci-4081.3.0-a-b3fea05ed8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b3fea05ed8,},FirstTimestamp:2025-01-30 14:21:31.555957513 +0000 UTC m=+0.269594559,LastTimestamp:2025-01-30 14:21:31.555957513 +0000 UTC 
m=+0.269594559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b3fea05ed8,}" Jan 30 14:21:31.568535 kubelet[2629]: I0130 14:21:31.568515 2629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:21:31.569083 kubelet[2629]: I0130 14:21:31.569070 2629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:21:31.569118 kubelet[2629]: I0130 14:21:31.569085 2629 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 14:21:31.569118 kubelet[2629]: I0130 14:21:31.569097 2629 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 14:21:31.569118 kubelet[2629]: I0130 14:21:31.569101 2629 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 14:21:31.569173 kubelet[2629]: E0130 14:21:31.569127 2629 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:21:31.569525 kubelet[2629]: W0130 14:21:31.569442 2629 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.237:6443: connect: connection refused Jan 30 14:21:31.569525 kubelet[2629]: E0130 14:21:31.569481 2629 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:31.594194 kubelet[2629]: I0130 14:21:31.594156 2629 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 14:21:31.594194 kubelet[2629]: I0130 14:21:31.594191 2629 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 14:21:31.594269 kubelet[2629]: I0130 14:21:31.594205 2629 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:21:31.595151 kubelet[2629]: I0130 14:21:31.595142 2629 policy_none.go:49] "None policy: Start" Jan 30 14:21:31.595151 kubelet[2629]: I0130 14:21:31.595152 2629 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 14:21:31.595233 kubelet[2629]: I0130 14:21:31.595162 2629 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:21:31.598400 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:21:31.621103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:21:31.623066 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
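Every reflector, lease, event, and CSR call above fails the same way: dial tcp 139.178.70.237:6443: connect: connection refused. Nothing is wrong with the kubelet itself; the kube-apiserver it bootstraps against is one of the static pods this same kubelet is about to start, so these errors persist exactly until that pod is up. A minimal probe of the same endpoint:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address and port the kubelet's client is dialing in the errors above.
	conn, err := net.DialTimeout("tcp", "139.178.70.237:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}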
Jan 30 14:21:31.635915 kubelet[2629]: I0130 14:21:31.635877 2629 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:21:31.636011 kubelet[2629]: I0130 14:21:31.635976 2629 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:21:31.636011 kubelet[2629]: I0130 14:21:31.635983 2629 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:21:31.636080 kubelet[2629]: I0130 14:21:31.636069 2629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:21:31.636382 kubelet[2629]: E0130 14:21:31.636373 2629 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 14:21:31.636418 kubelet[2629]: E0130 14:21:31.636398 2629 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:31.678085 systemd[1]: Created slice kubepods-burstable-podfdbe53db0e978e8a91992c9420885a56.slice - libcontainer container kubepods-burstable-podfdbe53db0e978e8a91992c9420885a56.slice. Jan 30 14:21:31.711531 kubelet[2629]: E0130 14:21:31.711396 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.722423 systemd[1]: Created slice kubepods-burstable-pod6886263005d75c95ea8c57aa01eb6bf4.slice - libcontainer container kubepods-burstable-pod6886263005d75c95ea8c57aa01eb6bf4.slice. Jan 30 14:21:31.726937 kubelet[2629]: E0130 14:21:31.726838 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.731956 systemd[1]: Created slice kubepods-burstable-podc378a3de1c8a29d489942677eeb8c426.slice - libcontainer container kubepods-burstable-podc378a3de1c8a29d489942677eeb8c426.slice. 
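The kubepods-burstable-pod*.slice units above correspond to the control-plane static pods (apiserver, scheduler, controller-manager) the kubelet reads from its static pod path, /etc/kubernetes/manifests; the "No need to create a mirror pod" errors are the kubelet failing to register their mirror pods with the still-unreachable API server. A sketch of the general shape of such a manifest, with the image tag taken from the pulls earlier in this log but the command and mounts abbreviated, not the actual file written on this node:

package main

import "os"

// Abbreviated static-pod manifest; the "kubeconfig" host-path volume matches
// the VerifyControllerAttachedVolume entries for the scheduler pod below.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.32.1
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.conf
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
`

func main() {
	// Files appearing under the static pod path become pods with no apiserver involvement.
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-scheduler.yaml", []byte(manifest), 0o600); err != nil {
		panic(err)
	}
}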
Jan 30 14:21:31.736175 kubelet[2629]: E0130 14:21:31.736096 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.740474 kubelet[2629]: I0130 14:21:31.740384 2629 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.741263 kubelet[2629]: E0130 14:21:31.741155 2629 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.237:6443/api/v1/nodes\": dial tcp 139.178.70.237:6443: connect: connection refused" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.754250 kubelet[2629]: E0130 14:21:31.753997 2629 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.237:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.237:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-b3fea05ed8.181f7e56971eab09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-b3fea05ed8,UID:ci-4081.3.0-a-b3fea05ed8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-b3fea05ed8,},FirstTimestamp:2025-01-30 14:21:31.555957513 +0000 UTC m=+0.269594559,LastTimestamp:2025-01-30 14:21:31.555957513 +0000 UTC m=+0.269594559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-b3fea05ed8,}" Jan 30 14:21:31.761070 kubelet[2629]: E0130 14:21:31.760958 2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b3fea05ed8?timeout=10s\": dial tcp 139.178.70.237:6443: connect: connection refused" interval="400ms" Jan 30 14:21:31.761263 kubelet[2629]: I0130 14:21:31.761192 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761404 kubelet[2629]: I0130 14:21:31.761264 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761404 kubelet[2629]: I0130 14:21:31.761343 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdbe53db0e978e8a91992c9420885a56-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" (UID: \"fdbe53db0e978e8a91992c9420885a56\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761404 kubelet[2629]: I0130 14:21:31.761397 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-k8s-certs\") pod 
\"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761723 kubelet[2629]: I0130 14:21:31.761445 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761723 kubelet[2629]: I0130 14:21:31.761494 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761723 kubelet[2629]: I0130 14:21:31.761541 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6886263005d75c95ea8c57aa01eb6bf4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b3fea05ed8\" (UID: \"6886263005d75c95ea8c57aa01eb6bf4\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761723 kubelet[2629]: I0130 14:21:31.761587 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdbe53db0e978e8a91992c9420885a56-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" (UID: \"fdbe53db0e978e8a91992c9420885a56\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.761723 kubelet[2629]: I0130 14:21:31.761636 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fdbe53db0e978e8a91992c9420885a56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" (UID: \"fdbe53db0e978e8a91992c9420885a56\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.946514 kubelet[2629]: I0130 14:21:31.946325 2629 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:31.947063 kubelet[2629]: E0130 14:21:31.946961 2629 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.237:6443/api/v1/nodes\": dial tcp 139.178.70.237:6443: connect: connection refused" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:32.013693 containerd[1802]: time="2025-01-30T14:21:32.013630850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b3fea05ed8,Uid:fdbe53db0e978e8a91992c9420885a56,Namespace:kube-system,Attempt:0,}" Jan 30 14:21:32.027979 containerd[1802]: time="2025-01-30T14:21:32.027935004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b3fea05ed8,Uid:6886263005d75c95ea8c57aa01eb6bf4,Namespace:kube-system,Attempt:0,}" Jan 30 14:21:32.037555 containerd[1802]: time="2025-01-30T14:21:32.037473720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b3fea05ed8,Uid:c378a3de1c8a29d489942677eeb8c426,Namespace:kube-system,Attempt:0,}" Jan 30 14:21:32.161527 kubelet[2629]: E0130 14:21:32.161497 
2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-b3fea05ed8?timeout=10s\": dial tcp 139.178.70.237:6443: connect: connection refused" interval="800ms" Jan 30 14:21:32.348899 kubelet[2629]: I0130 14:21:32.348823 2629 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:32.349150 kubelet[2629]: E0130 14:21:32.349104 2629 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.237:6443/api/v1/nodes\": dial tcp 139.178.70.237:6443: connect: connection refused" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:32.501481 kubelet[2629]: W0130 14:21:32.501410 2629 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.237:6443: connect: connection refused Jan 30 14:21:32.501481 kubelet[2629]: E0130 14:21:32.501458 2629 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:32.517992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343577274.mount: Deactivated successfully. Jan 30 14:21:32.519860 containerd[1802]: time="2025-01-30T14:21:32.519843725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:21:32.520188 containerd[1802]: time="2025-01-30T14:21:32.520169109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 14:21:32.520439 containerd[1802]: time="2025-01-30T14:21:32.520425973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:21:32.520844 containerd[1802]: time="2025-01-30T14:21:32.520834499Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:21:32.521195 containerd[1802]: time="2025-01-30T14:21:32.521184994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:21:32.521226 containerd[1802]: time="2025-01-30T14:21:32.521202965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:21:32.521474 containerd[1802]: time="2025-01-30T14:21:32.521456185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:21:32.522589 containerd[1802]: time="2025-01-30T14:21:32.522576785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:21:32.523975 
containerd[1802]: time="2025-01-30T14:21:32.523962232Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.264624ms" Jan 30 14:21:32.524830 containerd[1802]: time="2025-01-30T14:21:32.524782087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.811531ms" Jan 30 14:21:32.526004 containerd[1802]: time="2025-01-30T14:21:32.525968325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.365421ms" Jan 30 14:21:32.611003 containerd[1802]: time="2025-01-30T14:21:32.610875812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:32.611003 containerd[1802]: time="2025-01-30T14:21:32.610905853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:32.611003 containerd[1802]: time="2025-01-30T14:21:32.610913156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:32.611003 containerd[1802]: time="2025-01-30T14:21:32.610959906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:32.611126 containerd[1802]: time="2025-01-30T14:21:32.611062093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:32.611126 containerd[1802]: time="2025-01-30T14:21:32.611061614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:32.611126 containerd[1802]: time="2025-01-30T14:21:32.611086071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:32.611126 containerd[1802]: time="2025-01-30T14:21:32.611090242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:32.611126 containerd[1802]: time="2025-01-30T14:21:32.611093025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:32.611126 containerd[1802]: time="2025-01-30T14:21:32.611101886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:32.611248 containerd[1802]: time="2025-01-30T14:21:32.611134465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:32.611248 containerd[1802]: time="2025-01-30T14:21:32.611149981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:32.632625 systemd[1]: Started cri-containerd-7888208b02aaf99e125221aaee9d816b7828d406d497594259cdcb543a798623.scope - libcontainer container 7888208b02aaf99e125221aaee9d816b7828d406d497594259cdcb543a798623. Jan 30 14:21:32.633433 systemd[1]: Started cri-containerd-ac0a7725010262b618a830ab957dc40fb04c085f71baf7eb2fa0e8995b3556a7.scope - libcontainer container ac0a7725010262b618a830ab957dc40fb04c085f71baf7eb2fa0e8995b3556a7. Jan 30 14:21:32.634249 systemd[1]: Started cri-containerd-cf196d2c11b2de841faae18149d51648d07277314ea1a464a6cd020a953320e3.scope - libcontainer container cf196d2c11b2de841faae18149d51648d07277314ea1a464a6cd020a953320e3. Jan 30 14:21:32.656451 containerd[1802]: time="2025-01-30T14:21:32.656422090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-b3fea05ed8,Uid:6886263005d75c95ea8c57aa01eb6bf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7888208b02aaf99e125221aaee9d816b7828d406d497594259cdcb543a798623\"" Jan 30 14:21:32.656618 containerd[1802]: time="2025-01-30T14:21:32.656603964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-b3fea05ed8,Uid:fdbe53db0e978e8a91992c9420885a56,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0a7725010262b618a830ab957dc40fb04c085f71baf7eb2fa0e8995b3556a7\"" Jan 30 14:21:32.657307 containerd[1802]: time="2025-01-30T14:21:32.657287118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-b3fea05ed8,Uid:c378a3de1c8a29d489942677eeb8c426,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf196d2c11b2de841faae18149d51648d07277314ea1a464a6cd020a953320e3\"" Jan 30 14:21:32.658063 containerd[1802]: time="2025-01-30T14:21:32.658050397Z" level=info msg="CreateContainer within sandbox \"ac0a7725010262b618a830ab957dc40fb04c085f71baf7eb2fa0e8995b3556a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:21:32.658091 containerd[1802]: time="2025-01-30T14:21:32.658079281Z" level=info msg="CreateContainer within sandbox \"cf196d2c11b2de841faae18149d51648d07277314ea1a464a6cd020a953320e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:21:32.658111 containerd[1802]: time="2025-01-30T14:21:32.658052893Z" level=info msg="CreateContainer within sandbox \"7888208b02aaf99e125221aaee9d816b7828d406d497594259cdcb543a798623\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:21:32.664217 containerd[1802]: time="2025-01-30T14:21:32.664175085Z" level=info msg="CreateContainer within sandbox \"7888208b02aaf99e125221aaee9d816b7828d406d497594259cdcb543a798623\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58181b64486c532d415947ae9b3ffd9f4dd603eef596a9e5d89287c0d8db1613\"" Jan 30 14:21:32.664513 containerd[1802]: time="2025-01-30T14:21:32.664482715Z" level=info msg="StartContainer for \"58181b64486c532d415947ae9b3ffd9f4dd603eef596a9e5d89287c0d8db1613\"" Jan 30 14:21:32.665173 containerd[1802]: time="2025-01-30T14:21:32.665128336Z" level=info msg="CreateContainer within sandbox \"ac0a7725010262b618a830ab957dc40fb04c085f71baf7eb2fa0e8995b3556a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"213c8deaf2ea06ac5a09df919d60ae9445088ce3b7706bf178189ddba093b9e2\"" Jan 30 14:21:32.665368 containerd[1802]: time="2025-01-30T14:21:32.665329248Z" level=info msg="StartContainer for \"213c8deaf2ea06ac5a09df919d60ae9445088ce3b7706bf178189ddba093b9e2\"" Jan 30 14:21:32.665958 containerd[1802]: time="2025-01-30T14:21:32.665942695Z" level=info msg="CreateContainer within sandbox \"cf196d2c11b2de841faae18149d51648d07277314ea1a464a6cd020a953320e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"307819b4954bbb074973b82cfa09941ac6c7d57632e001a9522812690b535cfa\"" Jan 30 14:21:32.666151 containerd[1802]: time="2025-01-30T14:21:32.666112364Z" level=info msg="StartContainer for \"307819b4954bbb074973b82cfa09941ac6c7d57632e001a9522812690b535cfa\"" Jan 30 14:21:32.668628 kubelet[2629]: W0130 14:21:32.668594 2629 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b3fea05ed8&limit=500&resourceVersion=0": dial tcp 139.178.70.237:6443: connect: connection refused Jan 30 14:21:32.668694 kubelet[2629]: E0130 14:21:32.668637 2629 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-b3fea05ed8&limit=500&resourceVersion=0\": dial tcp 139.178.70.237:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:21:32.695597 systemd[1]: Started cri-containerd-213c8deaf2ea06ac5a09df919d60ae9445088ce3b7706bf178189ddba093b9e2.scope - libcontainer container 213c8deaf2ea06ac5a09df919d60ae9445088ce3b7706bf178189ddba093b9e2. Jan 30 14:21:32.696146 systemd[1]: Started cri-containerd-307819b4954bbb074973b82cfa09941ac6c7d57632e001a9522812690b535cfa.scope - libcontainer container 307819b4954bbb074973b82cfa09941ac6c7d57632e001a9522812690b535cfa. Jan 30 14:21:32.696740 systemd[1]: Started cri-containerd-58181b64486c532d415947ae9b3ffd9f4dd603eef596a9e5d89287c0d8db1613.scope - libcontainer container 58181b64486c532d415947ae9b3ffd9f4dd603eef596a9e5d89287c0d8db1613. 
Jan 30 14:21:32.719907 containerd[1802]: time="2025-01-30T14:21:32.719878045Z" level=info msg="StartContainer for \"213c8deaf2ea06ac5a09df919d60ae9445088ce3b7706bf178189ddba093b9e2\" returns successfully" Jan 30 14:21:32.722005 containerd[1802]: time="2025-01-30T14:21:32.721984181Z" level=info msg="StartContainer for \"58181b64486c532d415947ae9b3ffd9f4dd603eef596a9e5d89287c0d8db1613\" returns successfully" Jan 30 14:21:32.722078 containerd[1802]: time="2025-01-30T14:21:32.721984200Z" level=info msg="StartContainer for \"307819b4954bbb074973b82cfa09941ac6c7d57632e001a9522812690b535cfa\" returns successfully" Jan 30 14:21:33.150548 kubelet[2629]: I0130 14:21:33.150530 2629 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:33.365785 kubelet[2629]: E0130 14:21:33.365767 2629 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:33.464820 kubelet[2629]: I0130 14:21:33.464803 2629 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:33.464820 kubelet[2629]: E0130 14:21:33.464822 2629 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-a-b3fea05ed8\": node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:33.467090 kubelet[2629]: E0130 14:21:33.467077 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:33.567575 kubelet[2629]: E0130 14:21:33.567534 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:33.577705 kubelet[2629]: E0130 14:21:33.577677 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:33.578831 kubelet[2629]: E0130 14:21:33.578766 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:33.580276 kubelet[2629]: E0130 14:21:33.580222 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:33.668088 kubelet[2629]: E0130 14:21:33.668032 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:33.769251 kubelet[2629]: E0130 14:21:33.769035 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:33.869293 kubelet[2629]: E0130 14:21:33.869174 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:33.969572 kubelet[2629]: E0130 14:21:33.969421 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.070111 kubelet[2629]: E0130 14:21:34.069886 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.170791 kubelet[2629]: E0130 14:21:34.170678 2629 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.271328 kubelet[2629]: E0130 14:21:34.271176 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.371602 kubelet[2629]: E0130 14:21:34.371388 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.471512 kubelet[2629]: E0130 14:21:34.471467 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.571695 kubelet[2629]: E0130 14:21:34.571607 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.588289 kubelet[2629]: E0130 14:21:34.588229 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:34.588581 kubelet[2629]: E0130 14:21:34.588527 2629 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:34.672507 kubelet[2629]: E0130 14:21:34.672295 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.773431 kubelet[2629]: E0130 14:21:34.773355 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.874537 kubelet[2629]: E0130 14:21:34.874442 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:34.975619 kubelet[2629]: E0130 14:21:34.975513 2629 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:35.060581 kubelet[2629]: I0130 14:21:35.060493 2629 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:35.072544 kubelet[2629]: W0130 14:21:35.072484 2629 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:35.072799 kubelet[2629]: I0130 14:21:35.072762 2629 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:35.078480 kubelet[2629]: W0130 14:21:35.078430 2629 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:35.078750 kubelet[2629]: I0130 14:21:35.078636 2629 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:35.084915 kubelet[2629]: W0130 14:21:35.084867 2629 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:35.553189 kubelet[2629]: I0130 14:21:35.553117 2629 apiserver.go:52] "Watching apiserver" Jan 30 14:21:35.561164 kubelet[2629]: I0130 14:21:35.561079 2629 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" 
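[Annotation] The earlier "Failed to ensure lease exists, will retry ... interval=800ms" error is the kubelet's node-lease controller backing off while the apiserver is unreachable; once registration succeeds it creates and renews a Lease in the kube-node-lease namespace. A hedged sketch of reading that object, assuming a clientset built as in the previous example:

```go
// Sketch, reusing the clientset (cs) from the previous example; it reads the
// per-node Lease (kube-node-lease/ci-4081.3.0-a-b3fea05ed8) that the lease
// controller above was failing to ensure.
func dumpNodeLease(ctx context.Context, cs kubernetes.Interface) error {
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(ctx, "ci-4081.3.0-a-b3fea05ed8", metav1.GetOptions{})
	if err != nil {
		return err // "connection refused" while the apiserver is still down
	}
	if lease.Spec.HolderIdentity != nil {
		// HolderIdentity is the node name; RenewTime advances on each renewal.
		fmt.Println("held by:", *lease.Spec.HolderIdentity,
			"renewed:", lease.Spec.RenewTime)
	}
	return nil
}
```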
Jan 30 14:21:35.730553 systemd[1]: Reloading requested from client PID 2950 ('systemctl') (unit session-11.scope)... Jan 30 14:21:35.730560 systemd[1]: Reloading... Jan 30 14:21:35.796312 zram_generator::config[2989]: No configuration found. Jan 30 14:21:35.872351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:21:35.939592 systemd[1]: Reloading finished in 208 ms. Jan 30 14:21:35.961004 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:35.969343 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:21:35.969449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:35.993790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:21:36.239996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:21:36.242341 (kubelet)[3054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:21:36.263273 kubelet[3054]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:21:36.263273 kubelet[3054]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 14:21:36.263273 kubelet[3054]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:21:36.263520 kubelet[3054]: I0130 14:21:36.263327 3054 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:21:36.266778 kubelet[3054]: I0130 14:21:36.266765 3054 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 14:21:36.266778 kubelet[3054]: I0130 14:21:36.266777 3054 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:21:36.266921 kubelet[3054]: I0130 14:21:36.266916 3054 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 14:21:36.267589 kubelet[3054]: I0130 14:21:36.267582 3054 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:21:36.268749 kubelet[3054]: I0130 14:21:36.268713 3054 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:21:36.270766 kubelet[3054]: E0130 14:21:36.270749 3054 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:21:36.270800 kubelet[3054]: I0130 14:21:36.270767 3054 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:21:36.277140 kubelet[3054]: I0130 14:21:36.277104 3054 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:21:36.277218 kubelet[3054]: I0130 14:21:36.277204 3054 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:21:36.277370 kubelet[3054]: I0130 14:21:36.277218 3054 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-b3fea05ed8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:21:36.277370 kubelet[3054]: I0130 14:21:36.277322 3054 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:21:36.277370 kubelet[3054]: I0130 14:21:36.277328 3054 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 14:21:36.277370 kubelet[3054]: I0130 14:21:36.277353 3054 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:21:36.277481 kubelet[3054]: I0130 14:21:36.277450 3054 kubelet.go:446] "Attempting to sync node with API server" Jan 30 14:21:36.277481 kubelet[3054]: I0130 14:21:36.277457 3054 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:21:36.277481 kubelet[3054]: I0130 14:21:36.277467 3054 kubelet.go:352] "Adding apiserver pod source" Jan 30 14:21:36.277481 kubelet[3054]: I0130 14:21:36.277473 3054 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:21:36.277771 kubelet[3054]: I0130 14:21:36.277758 3054 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:21:36.278042 kubelet[3054]: I0130 14:21:36.278033 3054 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:21:36.278551 kubelet[3054]: I0130 14:21:36.278539 3054 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 14:21:36.278732 kubelet[3054]: I0130 14:21:36.278636 3054 server.go:1287] "Started kubelet" Jan 30 14:21:36.278775 kubelet[3054]: I0130 14:21:36.278735 3054 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:21:36.279006 kubelet[3054]: I0130 14:21:36.278732 3054 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:21:36.279321 kubelet[3054]: I0130 14:21:36.279307 3054 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:21:36.279742 kubelet[3054]: I0130 14:21:36.279734 3054 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:21:36.279777 kubelet[3054]: I0130 14:21:36.279746 3054 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:21:36.279845 kubelet[3054]: I0130 14:21:36.279835 3054 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 14:21:36.279888 kubelet[3054]: I0130 14:21:36.279877 3054 server.go:490] "Adding debug handlers to kubelet server" Jan 30 14:21:36.279923 kubelet[3054]: I0130 14:21:36.279890 3054 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:21:36.279952 kubelet[3054]: E0130 14:21:36.279918 3054 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-b3fea05ed8\" not found" Jan 30 14:21:36.279985 kubelet[3054]: I0130 14:21:36.279977 3054 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:21:36.280547 kubelet[3054]: E0130 14:21:36.280433 3054 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:21:36.281522 kubelet[3054]: I0130 14:21:36.281511 3054 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:21:36.281576 kubelet[3054]: I0130 14:21:36.281565 3054 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:21:36.282198 kubelet[3054]: I0130 14:21:36.282186 3054 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:21:36.285515 kubelet[3054]: I0130 14:21:36.285492 3054 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:21:36.286085 kubelet[3054]: I0130 14:21:36.286073 3054 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:21:36.286141 kubelet[3054]: I0130 14:21:36.286094 3054 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 14:21:36.286141 kubelet[3054]: I0130 14:21:36.286107 3054 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 14:21:36.286141 kubelet[3054]: I0130 14:21:36.286112 3054 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 14:21:36.286197 kubelet[3054]: E0130 14:21:36.286138 3054 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:21:36.296849 kubelet[3054]: I0130 14:21:36.296801 3054 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 14:21:36.296849 kubelet[3054]: I0130 14:21:36.296810 3054 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 14:21:36.296849 kubelet[3054]: I0130 14:21:36.296819 3054 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:21:36.296970 kubelet[3054]: I0130 14:21:36.296912 3054 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:21:36.296970 kubelet[3054]: I0130 14:21:36.296919 3054 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:21:36.296970 kubelet[3054]: I0130 14:21:36.296935 3054 policy_none.go:49] "None policy: Start" Jan 30 14:21:36.296970 kubelet[3054]: I0130 14:21:36.296939 3054 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 14:21:36.296970 kubelet[3054]: I0130 14:21:36.296945 3054 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:21:36.297043 kubelet[3054]: I0130 14:21:36.297000 3054 state_mem.go:75] "Updated machine memory state" Jan 30 14:21:36.298865 kubelet[3054]: I0130 14:21:36.298834 3054 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:21:36.298944 kubelet[3054]: I0130 14:21:36.298913 3054 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:21:36.298944 kubelet[3054]: I0130 14:21:36.298919 3054 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:21:36.299027 kubelet[3054]: I0130 14:21:36.299018 3054 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:21:36.299310 kubelet[3054]: E0130 14:21:36.299297 3054 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 14:21:36.387883 kubelet[3054]: I0130 14:21:36.387810 3054 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.388250 kubelet[3054]: I0130 14:21:36.387906 3054 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.388250 kubelet[3054]: I0130 14:21:36.388051 3054 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.396728 kubelet[3054]: W0130 14:21:36.396670 3054 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:36.396728 kubelet[3054]: W0130 14:21:36.396721 3054 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:36.397118 kubelet[3054]: E0130 14:21:36.396812 3054 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.397118 kubelet[3054]: W0130 14:21:36.396817 3054 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:36.397118 kubelet[3054]: E0130 14:21:36.396848 3054 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.397118 kubelet[3054]: E0130 14:21:36.396989 3054 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.0-a-b3fea05ed8\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.406664 kubelet[3054]: I0130 14:21:36.406610 3054 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.415916 kubelet[3054]: I0130 14:21:36.415867 3054 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.416134 kubelet[3054]: I0130 14:21:36.416005 3054 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.582459 kubelet[3054]: I0130 14:21:36.582184 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6886263005d75c95ea8c57aa01eb6bf4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-b3fea05ed8\" (UID: \"6886263005d75c95ea8c57aa01eb6bf4\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.582459 kubelet[3054]: I0130 14:21:36.582290 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdbe53db0e978e8a91992c9420885a56-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" (UID: \"fdbe53db0e978e8a91992c9420885a56\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.582459 kubelet[3054]: I0130 14:21:36.582414 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.583138 kubelet[3054]: I0130 14:21:36.582521 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.583138 kubelet[3054]: I0130 14:21:36.582625 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.583138 kubelet[3054]: I0130 14:21:36.582713 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.583138 kubelet[3054]: I0130 14:21:36.582804 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdbe53db0e978e8a91992c9420885a56-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" (UID: \"fdbe53db0e978e8a91992c9420885a56\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.583138 kubelet[3054]: I0130 14:21:36.582938 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fdbe53db0e978e8a91992c9420885a56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" (UID: \"fdbe53db0e978e8a91992c9420885a56\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:36.583657 kubelet[3054]: I0130 14:21:36.583027 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c378a3de1c8a29d489942677eeb8c426-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-b3fea05ed8\" (UID: \"c378a3de1c8a29d489942677eeb8c426\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:37.278775 kubelet[3054]: I0130 14:21:37.278713 3054 apiserver.go:52] "Watching apiserver" Jan 30 14:21:37.290319 kubelet[3054]: I0130 14:21:37.290283 3054 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:37.295277 kubelet[3054]: W0130 14:21:37.295206 3054 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:21:37.295415 kubelet[3054]: E0130 14:21:37.295327 3054 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.0-a-b3fea05ed8\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" Jan 30 14:21:37.334364 kubelet[3054]: I0130 14:21:37.334295 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-b3fea05ed8" podStartSLOduration=2.334272347 podStartE2EDuration="2.334272347s" podCreationTimestamp="2025-01-30 14:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:21:37.334262574 +0000 UTC m=+1.090075063" watchObservedRunningTime="2025-01-30 14:21:37.334272347 +0000 UTC m=+1.090084836" Jan 30 14:21:37.338293 kubelet[3054]: I0130 14:21:37.338228 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-b3fea05ed8" podStartSLOduration=2.338217528 podStartE2EDuration="2.338217528s" podCreationTimestamp="2025-01-30 14:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:21:37.338130558 +0000 UTC m=+1.093943047" watchObservedRunningTime="2025-01-30 14:21:37.338217528 +0000 UTC m=+1.094030014" Jan 30 14:21:37.342966 kubelet[3054]: I0130 14:21:37.342909 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-b3fea05ed8" podStartSLOduration=2.342899116 podStartE2EDuration="2.342899116s" podCreationTimestamp="2025-01-30 14:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:21:37.342849088 +0000 UTC m=+1.098661577" watchObservedRunningTime="2025-01-30 14:21:37.342899116 +0000 UTC m=+1.098711604" Jan 30 14:21:37.381174 kubelet[3054]: I0130 14:21:37.381103 3054 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:21:40.395694 sudo[2078]: pam_unix(sudo:session): session closed for user root Jan 30 14:21:40.396551 sshd[2075]: pam_unix(sshd:session): session closed for user core Jan 30 14:21:40.398467 systemd[1]: sshd@8-139.178.70.237:22-147.75.109.163:54896.service: Deactivated successfully. Jan 30 14:21:40.399262 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:21:40.399386 systemd[1]: session-11.scope: Consumed 2.687s CPU time, 164.2M memory peak, 0B memory swap peak. Jan 30 14:21:40.399763 systemd-logind[1792]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:21:40.400416 systemd-logind[1792]: Removed session 11. Jan 30 14:21:42.495778 kubelet[3054]: I0130 14:21:42.495714 3054 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:21:42.496793 containerd[1802]: time="2025-01-30T14:21:42.496528485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:21:42.497426 kubelet[3054]: I0130 14:21:42.497006 3054 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:21:42.889900 systemd[1]: Created slice kubepods-besteffort-podfc6154dc_c743_4fa1_8e9e_168321ddf0a9.slice - libcontainer container kubepods-besteffort-podfc6154dc_c743_4fa1_8e9e_168321ddf0a9.slice. 
Jan 30 14:21:42.928487 kubelet[3054]: I0130 14:21:42.928420 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc6154dc-c743-4fa1-8e9e-168321ddf0a9-xtables-lock\") pod \"kube-proxy-5ncz4\" (UID: \"fc6154dc-c743-4fa1-8e9e-168321ddf0a9\") " pod="kube-system/kube-proxy-5ncz4" Jan 30 14:21:42.928487 kubelet[3054]: I0130 14:21:42.928463 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtgd\" (UniqueName: \"kubernetes.io/projected/fc6154dc-c743-4fa1-8e9e-168321ddf0a9-kube-api-access-4vtgd\") pod \"kube-proxy-5ncz4\" (UID: \"fc6154dc-c743-4fa1-8e9e-168321ddf0a9\") " pod="kube-system/kube-proxy-5ncz4" Jan 30 14:21:42.928671 kubelet[3054]: I0130 14:21:42.928496 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc6154dc-c743-4fa1-8e9e-168321ddf0a9-kube-proxy\") pod \"kube-proxy-5ncz4\" (UID: \"fc6154dc-c743-4fa1-8e9e-168321ddf0a9\") " pod="kube-system/kube-proxy-5ncz4" Jan 30 14:21:42.928671 kubelet[3054]: I0130 14:21:42.928519 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc6154dc-c743-4fa1-8e9e-168321ddf0a9-lib-modules\") pod \"kube-proxy-5ncz4\" (UID: \"fc6154dc-c743-4fa1-8e9e-168321ddf0a9\") " pod="kube-system/kube-proxy-5ncz4" Jan 30 14:21:43.041374 kubelet[3054]: E0130 14:21:43.041268 3054 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 14:21:43.041374 kubelet[3054]: E0130 14:21:43.041353 3054 projected.go:194] Error preparing data for projected volume kube-api-access-4vtgd for pod kube-system/kube-proxy-5ncz4: configmap "kube-root-ca.crt" not found Jan 30 14:21:43.041715 kubelet[3054]: E0130 14:21:43.041487 3054 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc6154dc-c743-4fa1-8e9e-168321ddf0a9-kube-api-access-4vtgd podName:fc6154dc-c743-4fa1-8e9e-168321ddf0a9 nodeName:}" failed. No retries permitted until 2025-01-30 14:21:43.54143945 +0000 UTC m=+7.297252007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4vtgd" (UniqueName: "kubernetes.io/projected/fc6154dc-c743-4fa1-8e9e-168321ddf0a9-kube-api-access-4vtgd") pod "kube-proxy-5ncz4" (UID: "fc6154dc-c743-4fa1-8e9e-168321ddf0a9") : configmap "kube-root-ca.crt" not found Jan 30 14:21:43.644829 systemd[1]: Created slice kubepods-besteffort-pod0eddb154_861d_4ba9_84e5_da6cfac2557c.slice - libcontainer container kubepods-besteffort-pod0eddb154_861d_4ba9_84e5_da6cfac2557c.slice. 
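[Annotation] The MountVolume.SetUp failure above is expected during bootstrap: the kube-api-access projected volume needs the kube-root-ca.crt ConfigMap, which the controller manager has not published yet, so the operation is requeued with "No retries permitted until ... (durationBeforeRetry 500ms)". The kubelet retries such operations with exponential backoff starting at 500ms; a rough sketch of that policy with apimachinery's wait package, where the step count and cap are illustrative rather than the kubelet's literal constants:

```go
// Sketch of an exponential-backoff retry like the volume mount above.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// mountProjectedVolume is a hypothetical stand-in for the real mount attempt;
// it succeeds once the kube-root-ca.crt ConfigMap exists.
func mountProjectedVolume() bool { return true }

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial delay, as in the log
		Factor:   2.0,                    // delay doubles after each failure
		Steps:    6,                      // illustrative bound
		Cap:      2 * time.Minute,        // illustrative cap
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		ok := mountProjectedVolume()
		fmt.Printf("attempt %d ok=%v\n", attempt, ok)
		return ok, nil
	})
	if err != nil {
		panic(err) // wait.ErrWaitTimeout if all steps fail
	}
}
```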
Jan 30 14:21:43.734784 kubelet[3054]: I0130 14:21:43.734694 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0eddb154-861d-4ba9-84e5-da6cfac2557c-var-lib-calico\") pod \"tigera-operator-7d68577dc5-n6v89\" (UID: \"0eddb154-861d-4ba9-84e5-da6cfac2557c\") " pod="tigera-operator/tigera-operator-7d68577dc5-n6v89" Jan 30 14:21:43.735720 kubelet[3054]: I0130 14:21:43.734815 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fp52\" (UniqueName: \"kubernetes.io/projected/0eddb154-861d-4ba9-84e5-da6cfac2557c-kube-api-access-7fp52\") pod \"tigera-operator-7d68577dc5-n6v89\" (UID: \"0eddb154-861d-4ba9-84e5-da6cfac2557c\") " pod="tigera-operator/tigera-operator-7d68577dc5-n6v89" Jan 30 14:21:43.806700 containerd[1802]: time="2025-01-30T14:21:43.806574876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ncz4,Uid:fc6154dc-c743-4fa1-8e9e-168321ddf0a9,Namespace:kube-system,Attempt:0,}" Jan 30 14:21:43.821993 containerd[1802]: time="2025-01-30T14:21:43.821924684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:43.821993 containerd[1802]: time="2025-01-30T14:21:43.821956123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:43.821993 containerd[1802]: time="2025-01-30T14:21:43.821967701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:43.822116 containerd[1802]: time="2025-01-30T14:21:43.822016384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:43.840419 systemd[1]: Started cri-containerd-5dac110d0b3007160c31b7ced0dba3881eb2fe538a9651e1406b1b87892a84f7.scope - libcontainer container 5dac110d0b3007160c31b7ced0dba3881eb2fe538a9651e1406b1b87892a84f7. Jan 30 14:21:43.853476 containerd[1802]: time="2025-01-30T14:21:43.853449917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ncz4,Uid:fc6154dc-c743-4fa1-8e9e-168321ddf0a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dac110d0b3007160c31b7ced0dba3881eb2fe538a9651e1406b1b87892a84f7\"" Jan 30 14:21:43.855115 containerd[1802]: time="2025-01-30T14:21:43.855095160Z" level=info msg="CreateContainer within sandbox \"5dac110d0b3007160c31b7ced0dba3881eb2fe538a9651e1406b1b87892a84f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:21:43.862131 containerd[1802]: time="2025-01-30T14:21:43.862091218Z" level=info msg="CreateContainer within sandbox \"5dac110d0b3007160c31b7ced0dba3881eb2fe538a9651e1406b1b87892a84f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1bd258fff5831e5576b38d82d210ee93b983dae4656d5e4a71c0d0075bfa8cc\"" Jan 30 14:21:43.862360 containerd[1802]: time="2025-01-30T14:21:43.862346964Z" level=info msg="StartContainer for \"a1bd258fff5831e5576b38d82d210ee93b983dae4656d5e4a71c0d0075bfa8cc\"" Jan 30 14:21:43.890496 systemd[1]: Started cri-containerd-a1bd258fff5831e5576b38d82d210ee93b983dae4656d5e4a71c0d0075bfa8cc.scope - libcontainer container a1bd258fff5831e5576b38d82d210ee93b983dae4656d5e4a71c0d0075bfa8cc. 
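[Annotation] The kube-proxy lines above trace the standard CRI lifecycle: RunPodSandbox returns a sandbox ID, CreateContainer creates the container inside it, then StartContainer runs it. A hedged sketch of that sequence, assuming the RuntimeServiceClient from the earlier sketch; metadata values are copied from the log, while the image tag is an assumption (the log does not print kube-proxy's image here) and real ContainerConfigs carry many more fields.

```go
// Sketch of the RunPodSandbox -> CreateContainer -> StartContainer sequence.
func startKubeProxy(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-5ncz4",
			Uid:       "fc6154dc-c743-4fa1-8e9e-168321ddf0a9",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Assumed tag for illustration; kubelet v1.32.0 is running here.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```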
Jan 30 14:21:43.906304 containerd[1802]: time="2025-01-30T14:21:43.906240708Z" level=info msg="StartContainer for \"a1bd258fff5831e5576b38d82d210ee93b983dae4656d5e4a71c0d0075bfa8cc\" returns successfully" Jan 30 14:21:43.946953 containerd[1802]: time="2025-01-30T14:21:43.946892479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-n6v89,Uid:0eddb154-861d-4ba9-84e5-da6cfac2557c,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:21:43.956383 containerd[1802]: time="2025-01-30T14:21:43.956173198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:43.956383 containerd[1802]: time="2025-01-30T14:21:43.956322887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:43.956383 containerd[1802]: time="2025-01-30T14:21:43.956332059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:43.956383 containerd[1802]: time="2025-01-30T14:21:43.956374944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:43.973652 systemd[1]: Started cri-containerd-ca6c63ca70978122325ad94a46d2bd98ce9c0284bcf55d34e66217cb6e89b316.scope - libcontainer container ca6c63ca70978122325ad94a46d2bd98ce9c0284bcf55d34e66217cb6e89b316. Jan 30 14:21:43.995440 containerd[1802]: time="2025-01-30T14:21:43.995413209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-n6v89,Uid:0eddb154-861d-4ba9-84e5-da6cfac2557c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ca6c63ca70978122325ad94a46d2bd98ce9c0284bcf55d34e66217cb6e89b316\"" Jan 30 14:21:43.996183 containerd[1802]: time="2025-01-30T14:21:43.996171075Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:21:44.314161 kubelet[3054]: I0130 14:21:44.314125 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5ncz4" podStartSLOduration=2.314113495 podStartE2EDuration="2.314113495s" podCreationTimestamp="2025-01-30 14:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:21:44.31391587 +0000 UTC m=+8.069728366" watchObservedRunningTime="2025-01-30 14:21:44.314113495 +0000 UTC m=+8.069925986" Jan 30 14:21:45.798502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708350768.mount: Deactivated successfully. 
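[Annotation] The pod_startup_latency_tracker numbers above fit together arithmetically: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts time spent pulling images (zero for kube-proxy, whose pull timestamps are the zero time 0001-01-01, so the two durations match). A small worked check in Go:

```go
// Worked check of the kube-proxy latency line above:
// 14:21:44.314113495 - 14:21:42 = 2.314113495s, matching the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-30 14:21:42 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-30 14:21:44.314113495 +0000 UTC")

	e2e := running.Sub(created)
	pulling := time.Duration(0) // both pull timestamps are the zero time
	slo := e2e - pulling

	fmt.Println(e2e, slo) // 2.314113495s 2.314113495s
}
```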
Jan 30 14:21:46.021077 containerd[1802]: time="2025-01-30T14:21:46.021023831Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:46.021286 containerd[1802]: time="2025-01-30T14:21:46.021166469Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 14:21:46.021578 containerd[1802]: time="2025-01-30T14:21:46.021539132Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:46.022669 containerd[1802]: time="2025-01-30T14:21:46.022630137Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:46.023108 containerd[1802]: time="2025-01-30T14:21:46.023069457Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.026882012s" Jan 30 14:21:46.023108 containerd[1802]: time="2025-01-30T14:21:46.023086697Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 14:21:46.024056 containerd[1802]: time="2025-01-30T14:21:46.024043071Z" level=info msg="CreateContainer within sandbox \"ca6c63ca70978122325ad94a46d2bd98ce9c0284bcf55d34e66217cb6e89b316\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:21:46.027870 containerd[1802]: time="2025-01-30T14:21:46.027827848Z" level=info msg="CreateContainer within sandbox \"ca6c63ca70978122325ad94a46d2bd98ce9c0284bcf55d34e66217cb6e89b316\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"28b682500ad92abc61ebc18bb228362f17e7fbc2827879b35159fe159b99a9b7\"" Jan 30 14:21:46.028081 containerd[1802]: time="2025-01-30T14:21:46.028042811Z" level=info msg="StartContainer for \"28b682500ad92abc61ebc18bb228362f17e7fbc2827879b35159fe159b99a9b7\"" Jan 30 14:21:46.056421 systemd[1]: Started cri-containerd-28b682500ad92abc61ebc18bb228362f17e7fbc2827879b35159fe159b99a9b7.scope - libcontainer container 28b682500ad92abc61ebc18bb228362f17e7fbc2827879b35159fe159b99a9b7. 
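[Annotation] The tigera-operator lines above show the other half of the pull path: a PullImage request for quay.io/tigera/operator:v1.36.2 completing in roughly two seconds and resolving to a sha256 image ID. A hedged sketch of that CRI call, assuming an ImageServiceClient built over the same containerd connection as the earlier runtime-service sketch:

```go
// Sketch of the CRI image pull recorded above; the ImageServiceClient is
// created with runtimeapi.NewImageServiceClient(conn).
func pullOperatorImage(ctx context.Context, img runtimeapi.ImageServiceClient) (string, error) {
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		return "", err
	}
	// resp.ImageRef is the ID containerd reports back, e.g. the
	// sha256:3045aa4a... reference logged above.
	return resp.ImageRef, nil
}
```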
Jan 30 14:21:46.067417 containerd[1802]: time="2025-01-30T14:21:46.067395045Z" level=info msg="StartContainer for \"28b682500ad92abc61ebc18bb228362f17e7fbc2827879b35159fe159b99a9b7\" returns successfully" Jan 30 14:21:46.849435 kubelet[3054]: I0130 14:21:46.849350 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-n6v89" podStartSLOduration=1.821797482 podStartE2EDuration="3.849315352s" podCreationTimestamp="2025-01-30 14:21:43 +0000 UTC" firstStartedPulling="2025-01-30 14:21:43.995966808 +0000 UTC m=+7.751779293" lastFinishedPulling="2025-01-30 14:21:46.023484675 +0000 UTC m=+9.779297163" observedRunningTime="2025-01-30 14:21:46.31896036 +0000 UTC m=+10.074772849" watchObservedRunningTime="2025-01-30 14:21:46.849315352 +0000 UTC m=+10.605127877" Jan 30 14:21:49.045279 systemd[1]: Created slice kubepods-besteffort-podd6784368_4497_4c74_8381_80774a435e4c.slice - libcontainer container kubepods-besteffort-podd6784368_4497_4c74_8381_80774a435e4c.slice. Jan 30 14:21:49.053086 systemd[1]: Created slice kubepods-besteffort-pod60341076_28ed_4af7_8389_c4d9804d5409.slice - libcontainer container kubepods-besteffort-pod60341076_28ed_4af7_8389_c4d9804d5409.slice. Jan 30 14:21:49.067632 kubelet[3054]: E0130 14:21:49.067561 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrdfq" podUID="2025d343-9493-4be3-aac1-dde8efb093f7" Jan 30 14:21:49.069935 kubelet[3054]: I0130 14:21:49.069916 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzdk\" (UniqueName: \"kubernetes.io/projected/d6784368-4497-4c74-8381-80774a435e4c-kube-api-access-glzdk\") pod \"calico-typha-847c6c77b5-x9qhs\" (UID: \"d6784368-4497-4c74-8381-80774a435e4c\") " pod="calico-system/calico-typha-847c6c77b5-x9qhs" Jan 30 14:21:49.070027 kubelet[3054]: I0130 14:21:49.069941 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-policysync\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070027 kubelet[3054]: I0130 14:21:49.069954 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-cni-bin-dir\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070027 kubelet[3054]: I0130 14:21:49.069967 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-var-lib-calico\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070027 kubelet[3054]: I0130 14:21:49.069981 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-flexvol-driver-host\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " 
pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070027 kubelet[3054]: I0130 14:21:49.069992 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhjh\" (UniqueName: \"kubernetes.io/projected/60341076-28ed-4af7-8389-c4d9804d5409-kube-api-access-9jhjh\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070148 kubelet[3054]: I0130 14:21:49.070006 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-lib-modules\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070148 kubelet[3054]: I0130 14:21:49.070026 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60341076-28ed-4af7-8389-c4d9804d5409-tigera-ca-bundle\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070148 kubelet[3054]: I0130 14:21:49.070081 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-cni-net-dir\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070148 kubelet[3054]: I0130 14:21:49.070114 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-var-run-calico\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070148 kubelet[3054]: I0130 14:21:49.070135 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-xtables-lock\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070256 kubelet[3054]: I0130 14:21:49.070157 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60341076-28ed-4af7-8389-c4d9804d5409-cni-log-dir\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.070256 kubelet[3054]: I0130 14:21:49.070177 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d6784368-4497-4c74-8381-80774a435e4c-typha-certs\") pod \"calico-typha-847c6c77b5-x9qhs\" (UID: \"d6784368-4497-4c74-8381-80774a435e4c\") " pod="calico-system/calico-typha-847c6c77b5-x9qhs" Jan 30 14:21:49.070256 kubelet[3054]: I0130 14:21:49.070199 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6784368-4497-4c74-8381-80774a435e4c-tigera-ca-bundle\") pod \"calico-typha-847c6c77b5-x9qhs\" (UID: \"d6784368-4497-4c74-8381-80774a435e4c\") " 
pod="calico-system/calico-typha-847c6c77b5-x9qhs" Jan 30 14:21:49.070256 kubelet[3054]: I0130 14:21:49.070221 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60341076-28ed-4af7-8389-c4d9804d5409-node-certs\") pod \"calico-node-vmdzk\" (UID: \"60341076-28ed-4af7-8389-c4d9804d5409\") " pod="calico-system/calico-node-vmdzk" Jan 30 14:21:49.170786 kubelet[3054]: I0130 14:21:49.170726 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46pz\" (UniqueName: \"kubernetes.io/projected/2025d343-9493-4be3-aac1-dde8efb093f7-kube-api-access-x46pz\") pod \"csi-node-driver-rrdfq\" (UID: \"2025d343-9493-4be3-aac1-dde8efb093f7\") " pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:49.171046 kubelet[3054]: I0130 14:21:49.170840 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2025d343-9493-4be3-aac1-dde8efb093f7-kubelet-dir\") pod \"csi-node-driver-rrdfq\" (UID: \"2025d343-9493-4be3-aac1-dde8efb093f7\") " pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:49.171410 kubelet[3054]: I0130 14:21:49.171363 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2025d343-9493-4be3-aac1-dde8efb093f7-varrun\") pod \"csi-node-driver-rrdfq\" (UID: \"2025d343-9493-4be3-aac1-dde8efb093f7\") " pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:49.171580 kubelet[3054]: I0130 14:21:49.171473 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2025d343-9493-4be3-aac1-dde8efb093f7-registration-dir\") pod \"csi-node-driver-rrdfq\" (UID: \"2025d343-9493-4be3-aac1-dde8efb093f7\") " pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:49.172294 kubelet[3054]: E0130 14:21:49.172249 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.172522 kubelet[3054]: W0130 14:21:49.172290 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.172666 kubelet[3054]: E0130 14:21:49.172596 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.173157 kubelet[3054]: E0130 14:21:49.173120 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.173339 kubelet[3054]: W0130 14:21:49.173153 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.173339 kubelet[3054]: E0130 14:21:49.173199 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.173691 kubelet[3054]: E0130 14:21:49.173651 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.173691 kubelet[3054]: W0130 14:21:49.173683 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.173955 kubelet[3054]: E0130 14:21:49.173728 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.174241 kubelet[3054]: E0130 14:21:49.174209 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.174241 kubelet[3054]: W0130 14:21:49.174230 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.174556 kubelet[3054]: E0130 14:21:49.174286 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.174688 kubelet[3054]: E0130 14:21:49.174666 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.174815 kubelet[3054]: W0130 14:21:49.174687 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.174815 kubelet[3054]: E0130 14:21:49.174746 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.175126 kubelet[3054]: E0130 14:21:49.175090 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.175126 kubelet[3054]: W0130 14:21:49.175121 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.175429 kubelet[3054]: E0130 14:21:49.175191 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.175674 kubelet[3054]: E0130 14:21:49.175635 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.175674 kubelet[3054]: W0130 14:21:49.175667 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.175934 kubelet[3054]: E0130 14:21:49.175742 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.176125 kubelet[3054]: E0130 14:21:49.176094 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.176125 kubelet[3054]: W0130 14:21:49.176116 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.176426 kubelet[3054]: E0130 14:21:49.176143 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.176641 kubelet[3054]: E0130 14:21:49.176600 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.176742 kubelet[3054]: W0130 14:21:49.176639 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.176742 kubelet[3054]: E0130 14:21:49.176684 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.176873 kubelet[3054]: I0130 14:21:49.176744 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2025d343-9493-4be3-aac1-dde8efb093f7-socket-dir\") pod \"csi-node-driver-rrdfq\" (UID: \"2025d343-9493-4be3-aac1-dde8efb093f7\") " pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:49.177256 kubelet[3054]: E0130 14:21:49.177225 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.177384 kubelet[3054]: W0130 14:21:49.177258 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.177384 kubelet[3054]: E0130 14:21:49.177299 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.177864 kubelet[3054]: E0130 14:21:49.177832 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.177956 kubelet[3054]: W0130 14:21:49.177870 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.177956 kubelet[3054]: E0130 14:21:49.177918 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.178473 kubelet[3054]: E0130 14:21:49.178414 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.178473 kubelet[3054]: W0130 14:21:49.178443 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.178734 kubelet[3054]: E0130 14:21:49.178483 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.178929 kubelet[3054]: E0130 14:21:49.178883 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.178929 kubelet[3054]: W0130 14:21:49.178912 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.179098 kubelet[3054]: E0130 14:21:49.178973 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.179384 kubelet[3054]: E0130 14:21:49.179296 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.179384 kubelet[3054]: W0130 14:21:49.179349 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.179616 kubelet[3054]: E0130 14:21:49.179397 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.179813 kubelet[3054]: E0130 14:21:49.179763 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.179813 kubelet[3054]: W0130 14:21:49.179790 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.179965 kubelet[3054]: E0130 14:21:49.179838 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.180261 kubelet[3054]: E0130 14:21:49.180225 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.180261 kubelet[3054]: W0130 14:21:49.180255 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.180549 kubelet[3054]: E0130 14:21:49.180357 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.180718 kubelet[3054]: E0130 14:21:49.180687 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.180718 kubelet[3054]: W0130 14:21:49.180717 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.180907 kubelet[3054]: E0130 14:21:49.180770 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.181206 kubelet[3054]: E0130 14:21:49.181175 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.181206 kubelet[3054]: W0130 14:21:49.181203 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.181494 kubelet[3054]: E0130 14:21:49.181270 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.181752 kubelet[3054]: E0130 14:21:49.181725 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.181831 kubelet[3054]: W0130 14:21:49.181755 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.181903 kubelet[3054]: E0130 14:21:49.181817 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.182164 kubelet[3054]: E0130 14:21:49.182142 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.182243 kubelet[3054]: W0130 14:21:49.182166 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.182338 kubelet[3054]: E0130 14:21:49.182231 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.182652 kubelet[3054]: E0130 14:21:49.182596 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.182652 kubelet[3054]: W0130 14:21:49.182621 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.182652 kubelet[3054]: E0130 14:21:49.182652 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.183090 kubelet[3054]: E0130 14:21:49.183063 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.183090 kubelet[3054]: W0130 14:21:49.183089 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.183372 kubelet[3054]: E0130 14:21:49.183118 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.183500 kubelet[3054]: E0130 14:21:49.183483 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.183647 kubelet[3054]: W0130 14:21:49.183509 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.183647 kubelet[3054]: E0130 14:21:49.183539 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.184178 kubelet[3054]: E0130 14:21:49.184150 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.184329 kubelet[3054]: W0130 14:21:49.184178 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.184329 kubelet[3054]: E0130 14:21:49.184209 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.186190 kubelet[3054]: E0130 14:21:49.186148 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.186190 kubelet[3054]: W0130 14:21:49.186181 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.186542 kubelet[3054]: E0130 14:21:49.186214 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.186712 kubelet[3054]: E0130 14:21:49.186679 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.186712 kubelet[3054]: W0130 14:21:49.186705 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.186927 kubelet[3054]: E0130 14:21:49.186737 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.281492 kubelet[3054]: E0130 14:21:49.281385 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.281492 kubelet[3054]: W0130 14:21:49.281428 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.281492 kubelet[3054]: E0130 14:21:49.281472 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.282183 kubelet[3054]: E0130 14:21:49.282100 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.282183 kubelet[3054]: W0130 14:21:49.282136 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.282183 kubelet[3054]: E0130 14:21:49.282176 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.282910 kubelet[3054]: E0130 14:21:49.282827 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.282910 kubelet[3054]: W0130 14:21:49.282864 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.282910 kubelet[3054]: E0130 14:21:49.282908 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.283602 kubelet[3054]: E0130 14:21:49.283518 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.283602 kubelet[3054]: W0130 14:21:49.283555 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.283602 kubelet[3054]: E0130 14:21:49.283597 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.284240 kubelet[3054]: E0130 14:21:49.284157 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.284240 kubelet[3054]: W0130 14:21:49.284194 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.284581 kubelet[3054]: E0130 14:21:49.284298 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.284886 kubelet[3054]: E0130 14:21:49.284801 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.284886 kubelet[3054]: W0130 14:21:49.284842 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.285167 kubelet[3054]: E0130 14:21:49.284963 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.285429 kubelet[3054]: E0130 14:21:49.285392 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.285429 kubelet[3054]: W0130 14:21:49.285424 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.285658 kubelet[3054]: E0130 14:21:49.285540 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.286074 kubelet[3054]: E0130 14:21:49.285993 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.286074 kubelet[3054]: W0130 14:21:49.286031 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.286074 kubelet[3054]: E0130 14:21:49.286079 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.286802 kubelet[3054]: E0130 14:21:49.286717 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.286802 kubelet[3054]: W0130 14:21:49.286755 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.286802 kubelet[3054]: E0130 14:21:49.286797 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.287474 kubelet[3054]: E0130 14:21:49.287419 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.287474 kubelet[3054]: W0130 14:21:49.287467 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.287823 kubelet[3054]: E0130 14:21:49.287528 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.288220 kubelet[3054]: E0130 14:21:49.288143 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.288220 kubelet[3054]: W0130 14:21:49.288186 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.288591 kubelet[3054]: E0130 14:21:49.288281 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.288860 kubelet[3054]: E0130 14:21:49.288808 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.288860 kubelet[3054]: W0130 14:21:49.288845 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.289159 kubelet[3054]: E0130 14:21:49.288971 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.289455 kubelet[3054]: E0130 14:21:49.289380 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.289455 kubelet[3054]: W0130 14:21:49.289406 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.289769 kubelet[3054]: E0130 14:21:49.289525 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.290036 kubelet[3054]: E0130 14:21:49.289950 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.290036 kubelet[3054]: W0130 14:21:49.289986 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.290347 kubelet[3054]: E0130 14:21:49.290098 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.290643 kubelet[3054]: E0130 14:21:49.290560 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.290643 kubelet[3054]: W0130 14:21:49.290596 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.290958 kubelet[3054]: E0130 14:21:49.290722 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.291174 kubelet[3054]: E0130 14:21:49.291085 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.291174 kubelet[3054]: W0130 14:21:49.291114 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.291481 kubelet[3054]: E0130 14:21:49.291186 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.291684 kubelet[3054]: E0130 14:21:49.291647 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.291827 kubelet[3054]: W0130 14:21:49.291683 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.291933 kubelet[3054]: E0130 14:21:49.291807 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.292247 kubelet[3054]: E0130 14:21:49.292216 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.292389 kubelet[3054]: W0130 14:21:49.292245 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.292389 kubelet[3054]: E0130 14:21:49.292345 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.292901 kubelet[3054]: E0130 14:21:49.292865 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.293020 kubelet[3054]: W0130 14:21:49.292903 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.293020 kubelet[3054]: E0130 14:21:49.292999 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.293595 kubelet[3054]: E0130 14:21:49.293513 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.293595 kubelet[3054]: W0130 14:21:49.293549 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.293889 kubelet[3054]: E0130 14:21:49.293676 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.294161 kubelet[3054]: E0130 14:21:49.294099 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.294161 kubelet[3054]: W0130 14:21:49.294132 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.294666 kubelet[3054]: E0130 14:21:49.294202 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.294866 kubelet[3054]: E0130 14:21:49.294709 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.294866 kubelet[3054]: W0130 14:21:49.294749 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.294866 kubelet[3054]: E0130 14:21:49.294791 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.295561 kubelet[3054]: E0130 14:21:49.295408 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.295561 kubelet[3054]: W0130 14:21:49.295437 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.295561 kubelet[3054]: E0130 14:21:49.295472 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.296118 kubelet[3054]: E0130 14:21:49.296090 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.296350 kubelet[3054]: W0130 14:21:49.296124 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.296350 kubelet[3054]: E0130 14:21:49.296159 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.296943 kubelet[3054]: E0130 14:21:49.296881 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.296943 kubelet[3054]: W0130 14:21:49.296918 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.297198 kubelet[3054]: E0130 14:21:49.296953 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:21:49.318714 kubelet[3054]: E0130 14:21:49.318660 3054 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:21:49.318714 kubelet[3054]: W0130 14:21:49.318703 3054 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:21:49.319086 kubelet[3054]: E0130 14:21:49.318739 3054 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:21:49.350118 containerd[1802]: time="2025-01-30T14:21:49.349998683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847c6c77b5-x9qhs,Uid:d6784368-4497-4c74-8381-80774a435e4c,Namespace:calico-system,Attempt:0,}" Jan 30 14:21:49.356559 containerd[1802]: time="2025-01-30T14:21:49.356522908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vmdzk,Uid:60341076-28ed-4af7-8389-c4d9804d5409,Namespace:calico-system,Attempt:0,}" Jan 30 14:21:49.379812 containerd[1802]: time="2025-01-30T14:21:49.379747743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:49.379812 containerd[1802]: time="2025-01-30T14:21:49.379778484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:49.379812 containerd[1802]: time="2025-01-30T14:21:49.379785975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:49.379916 containerd[1802]: time="2025-01-30T14:21:49.379862902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:49.381910 containerd[1802]: time="2025-01-30T14:21:49.381838885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:21:49.381971 containerd[1802]: time="2025-01-30T14:21:49.381901107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:21:49.382115 containerd[1802]: time="2025-01-30T14:21:49.382094107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:49.382160 containerd[1802]: time="2025-01-30T14:21:49.382147732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:21:49.400617 systemd[1]: Started cri-containerd-162ca223c9f85548d13175d6a24c26287f431fb174ed377fb3243d10da9ef786.scope - libcontainer container 162ca223c9f85548d13175d6a24c26287f431fb174ed377fb3243d10da9ef786. Jan 30 14:21:49.402183 systemd[1]: Started cri-containerd-1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27.scope - libcontainer container 1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27. 
Jan 30 14:21:49.411517 containerd[1802]: time="2025-01-30T14:21:49.411494603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vmdzk,Uid:60341076-28ed-4af7-8389-c4d9804d5409,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\"" Jan 30 14:21:49.412264 containerd[1802]: time="2025-01-30T14:21:49.412250490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:21:49.422849 containerd[1802]: time="2025-01-30T14:21:49.422800445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847c6c77b5-x9qhs,Uid:d6784368-4497-4c74-8381-80774a435e4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"162ca223c9f85548d13175d6a24c26287f431fb174ed377fb3243d10da9ef786\"" Jan 30 14:21:50.183415 update_engine[1797]: I20250130 14:21:50.183261 1797 update_attempter.cc:509] Updating boot flags... Jan 30 14:21:50.216308 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3681) Jan 30 14:21:50.242310 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3682) Jan 30 14:21:50.268309 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3682) Jan 30 14:21:50.909950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366011257.mount: Deactivated successfully. Jan 30 14:21:50.949665 containerd[1802]: time="2025-01-30T14:21:50.949618456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:50.949860 containerd[1802]: time="2025-01-30T14:21:50.949817609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 14:21:50.950190 containerd[1802]: time="2025-01-30T14:21:50.950154147Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:50.951104 containerd[1802]: time="2025-01-30T14:21:50.951068281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:50.951543 containerd[1802]: time="2025-01-30T14:21:50.951505279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.539237789s" Jan 30 14:21:50.951543 containerd[1802]: time="2025-01-30T14:21:50.951523140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 14:21:50.952051 containerd[1802]: time="2025-01-30T14:21:50.952038786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 14:21:50.952546 containerd[1802]: time="2025-01-30T14:21:50.952534267Z" level=info msg="CreateContainer within sandbox \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:21:50.957352 containerd[1802]: time="2025-01-30T14:21:50.957304548Z" level=info msg="CreateContainer within sandbox \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2\"" Jan 30 14:21:50.957569 containerd[1802]: time="2025-01-30T14:21:50.957511425Z" level=info msg="StartContainer for \"352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2\"" Jan 30 14:21:50.984441 systemd[1]: Started cri-containerd-352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2.scope - libcontainer container 352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2. Jan 30 14:21:51.000273 containerd[1802]: time="2025-01-30T14:21:51.000245829Z" level=info msg="StartContainer for \"352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2\" returns successfully" Jan 30 14:21:51.007893 systemd[1]: cri-containerd-352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2.scope: Deactivated successfully. Jan 30 14:21:51.181240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2-rootfs.mount: Deactivated successfully. Jan 30 14:21:51.265695 containerd[1802]: time="2025-01-30T14:21:51.265627253Z" level=info msg="shim disconnected" id=352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2 namespace=k8s.io Jan 30 14:21:51.265695 containerd[1802]: time="2025-01-30T14:21:51.265657859Z" level=warning msg="cleaning up after shim disconnected" id=352e529510513947fcfab0063ac76936215a2992265222ef6a3c9d0aa8f231c2 namespace=k8s.io Jan 30 14:21:51.265695 containerd[1802]: time="2025-01-30T14:21:51.265663036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:21:51.287265 kubelet[3054]: E0130 14:21:51.287212 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrdfq" podUID="2025d343-9493-4be3-aac1-dde8efb093f7" Jan 30 14:21:52.553972 containerd[1802]: time="2025-01-30T14:21:52.553922851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:52.554177 containerd[1802]: time="2025-01-30T14:21:52.554061958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 14:21:52.554486 containerd[1802]: time="2025-01-30T14:21:52.554472257Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:52.555413 containerd[1802]: time="2025-01-30T14:21:52.555399598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:52.555828 containerd[1802]: time="2025-01-30T14:21:52.555815887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.603761327s" Jan 30 14:21:52.555864 containerd[1802]: time="2025-01-30T14:21:52.555829973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 14:21:52.556286 containerd[1802]: time="2025-01-30T14:21:52.556276254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:21:52.559231 containerd[1802]: time="2025-01-30T14:21:52.559207864Z" level=info msg="CreateContainer within sandbox \"162ca223c9f85548d13175d6a24c26287f431fb174ed377fb3243d10da9ef786\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:21:52.563438 containerd[1802]: time="2025-01-30T14:21:52.563421631Z" level=info msg="CreateContainer within sandbox \"162ca223c9f85548d13175d6a24c26287f431fb174ed377fb3243d10da9ef786\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"74ca85f975800e513b43906647aa4216eeabe2271a67076da891f4846156b652\"" Jan 30 14:21:52.563655 containerd[1802]: time="2025-01-30T14:21:52.563642371Z" level=info msg="StartContainer for \"74ca85f975800e513b43906647aa4216eeabe2271a67076da891f4846156b652\"" Jan 30 14:21:52.590567 systemd[1]: Started cri-containerd-74ca85f975800e513b43906647aa4216eeabe2271a67076da891f4846156b652.scope - libcontainer container 74ca85f975800e513b43906647aa4216eeabe2271a67076da891f4846156b652. Jan 30 14:21:52.617042 containerd[1802]: time="2025-01-30T14:21:52.616988201Z" level=info msg="StartContainer for \"74ca85f975800e513b43906647aa4216eeabe2271a67076da891f4846156b652\" returns successfully" Jan 30 14:21:53.287342 kubelet[3054]: E0130 14:21:53.287183 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrdfq" podUID="2025d343-9493-4be3-aac1-dde8efb093f7" Jan 30 14:21:53.338593 kubelet[3054]: I0130 14:21:53.338560 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-847c6c77b5-x9qhs" podStartSLOduration=1.205590445 podStartE2EDuration="4.338544724s" podCreationTimestamp="2025-01-30 14:21:49 +0000 UTC" firstStartedPulling="2025-01-30 14:21:49.423274903 +0000 UTC m=+13.179087388" lastFinishedPulling="2025-01-30 14:21:52.556229179 +0000 UTC m=+16.312041667" observedRunningTime="2025-01-30 14:21:53.33840929 +0000 UTC m=+17.094221785" watchObservedRunningTime="2025-01-30 14:21:53.338544724 +0000 UTC m=+17.094357210" Jan 30 14:21:54.333406 kubelet[3054]: I0130 14:21:54.333388 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:21:54.852075 containerd[1802]: time="2025-01-30T14:21:54.852027503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:54.852265 containerd[1802]: time="2025-01-30T14:21:54.852231020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 14:21:54.852626 containerd[1802]: time="2025-01-30T14:21:54.852586559Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 
14:21:54.853608 containerd[1802]: time="2025-01-30T14:21:54.853570165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:54.854000 containerd[1802]: time="2025-01-30T14:21:54.853958230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.297667382s" Jan 30 14:21:54.854000 containerd[1802]: time="2025-01-30T14:21:54.853973562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 14:21:54.854932 containerd[1802]: time="2025-01-30T14:21:54.854920640Z" level=info msg="CreateContainer within sandbox \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:21:54.859857 containerd[1802]: time="2025-01-30T14:21:54.859812405Z" level=info msg="CreateContainer within sandbox \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8\"" Jan 30 14:21:54.860071 containerd[1802]: time="2025-01-30T14:21:54.860025935Z" level=info msg="StartContainer for \"a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8\"" Jan 30 14:21:54.887480 systemd[1]: Started cri-containerd-a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8.scope - libcontainer container a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8. Jan 30 14:21:54.899457 containerd[1802]: time="2025-01-30T14:21:54.899435130Z" level=info msg="StartContainer for \"a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8\" returns successfully" Jan 30 14:21:55.286878 kubelet[3054]: E0130 14:21:55.286820 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrdfq" podUID="2025d343-9493-4be3-aac1-dde8efb093f7" Jan 30 14:21:55.396141 systemd[1]: cri-containerd-a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8.scope: Deactivated successfully. Jan 30 14:21:55.407218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8-rootfs.mount: Deactivated successfully. Jan 30 14:21:55.473436 kubelet[3054]: I0130 14:21:55.473377 3054 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 14:21:55.529178 systemd[1]: Created slice kubepods-besteffort-pode37efeb5_45cf_4c20_9a68_83d461b1575b.slice - libcontainer container kubepods-besteffort-pode37efeb5_45cf_4c20_9a68_83d461b1575b.slice. Jan 30 14:21:55.536271 systemd[1]: Created slice kubepods-burstable-pod4a9af570_d877_46a4_8392_c4f16e337c47.slice - libcontainer container kubepods-burstable-pod4a9af570_d877_46a4_8392_c4f16e337c47.slice. 
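
The pod_startup_latency_tracker entries above (tigera-operator at 14:21:46, calico-typha at 14:21:53) report fields that are mutually consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), to within a few nanoseconds of rounding. A small Go sketch reproducing the arithmetic from the calico-typha values, illustrating how the logged fields relate rather than the kubelet's actual implementation:

```go
// startup_latency.go - checks the relationship between the fields logged by
// pod_startup_latency_tracker for calico-typha-847c6c77b5-x9qhs above.
package main

import (
	"fmt"
	"time"
)

// mustParse parses the timestamp format used in the log; Go accepts an
// optional fractional-seconds field when parsing even without it in the layout.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-30 14:21:49 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-01-30 14:21:49.423274903 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-01-30 14:21:52.556229179 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2025-01-30 14:21:53.338544724 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)    // podStartE2EDuration: 4.338544724s, matching the log
	pull := lastPull.Sub(firstPull) // image-pull window: 3.132954276s
	slo := e2e - pull               // ~1.205590448s vs logged 1.205590445 (rounding)
	fmt.Println(e2e, pull, slo)
}
```

The tigera-operator entry at 14:21:46 satisfies the same identity, with its roughly two-second pull window accounting for the gap between its SLO and end-to-end durations.
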
Jan 30 14:21:55.542051 systemd[1]: Created slice kubepods-burstable-pod9d71b019_e1d6_49af_8cc1_c191b20fbc5e.slice - libcontainer container kubepods-burstable-pod9d71b019_e1d6_49af_8cc1_c191b20fbc5e.slice. Jan 30 14:21:55.547025 systemd[1]: Created slice kubepods-besteffort-podaa5a76da_05ef_4313_b8e8_abf8bc713cb3.slice - libcontainer container kubepods-besteffort-podaa5a76da_05ef_4313_b8e8_abf8bc713cb3.slice. Jan 30 14:21:55.551006 systemd[1]: Created slice kubepods-besteffort-poddd528602_7e46_432f_8601_8c9ecb2abf83.slice - libcontainer container kubepods-besteffort-poddd528602_7e46_432f_8601_8c9ecb2abf83.slice. Jan 30 14:21:55.636711 kubelet[3054]: I0130 14:21:55.636600 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd528602-7e46-432f-8601-8c9ecb2abf83-calico-apiserver-certs\") pod \"calico-apiserver-5d9cd77888-nfhxt\" (UID: \"dd528602-7e46-432f-8601-8c9ecb2abf83\") " pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" Jan 30 14:21:55.636711 kubelet[3054]: I0130 14:21:55.636711 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e37efeb5-45cf-4c20-9a68-83d461b1575b-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb7d49bff-mr47g\" (UID: \"e37efeb5-45cf-4c20-9a68-83d461b1575b\") " pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" Jan 30 14:21:55.637203 kubelet[3054]: I0130 14:21:55.636858 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhb2d\" (UniqueName: \"kubernetes.io/projected/dd528602-7e46-432f-8601-8c9ecb2abf83-kube-api-access-lhb2d\") pod \"calico-apiserver-5d9cd77888-nfhxt\" (UID: \"dd528602-7e46-432f-8601-8c9ecb2abf83\") " pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" Jan 30 14:21:55.637203 kubelet[3054]: I0130 14:21:55.636945 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4grp\" (UniqueName: \"kubernetes.io/projected/e37efeb5-45cf-4c20-9a68-83d461b1575b-kube-api-access-s4grp\") pod \"calico-kube-controllers-6bb7d49bff-mr47g\" (UID: \"e37efeb5-45cf-4c20-9a68-83d461b1575b\") " pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" Jan 30 14:21:55.637203 kubelet[3054]: I0130 14:21:55.636999 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcsw\" (UniqueName: \"kubernetes.io/projected/4a9af570-d877-46a4-8392-c4f16e337c47-kube-api-access-kjcsw\") pod \"coredns-668d6bf9bc-25zb4\" (UID: \"4a9af570-d877-46a4-8392-c4f16e337c47\") " pod="kube-system/coredns-668d6bf9bc-25zb4" Jan 30 14:21:55.637203 kubelet[3054]: I0130 14:21:55.637048 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa5a76da-05ef-4313-b8e8-abf8bc713cb3-calico-apiserver-certs\") pod \"calico-apiserver-5d9cd77888-5f95g\" (UID: \"aa5a76da-05ef-4313-b8e8-abf8bc713cb3\") " pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" Jan 30 14:21:55.637203 kubelet[3054]: I0130 14:21:55.637095 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d71b019-e1d6-49af-8cc1-c191b20fbc5e-config-volume\") pod \"coredns-668d6bf9bc-d56fg\" (UID: 
\"9d71b019-e1d6-49af-8cc1-c191b20fbc5e\") " pod="kube-system/coredns-668d6bf9bc-d56fg" Jan 30 14:21:55.637703 kubelet[3054]: I0130 14:21:55.637134 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldpk\" (UniqueName: \"kubernetes.io/projected/9d71b019-e1d6-49af-8cc1-c191b20fbc5e-kube-api-access-wldpk\") pod \"coredns-668d6bf9bc-d56fg\" (UID: \"9d71b019-e1d6-49af-8cc1-c191b20fbc5e\") " pod="kube-system/coredns-668d6bf9bc-d56fg" Jan 30 14:21:55.637703 kubelet[3054]: I0130 14:21:55.637245 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjb8\" (UniqueName: \"kubernetes.io/projected/aa5a76da-05ef-4313-b8e8-abf8bc713cb3-kube-api-access-rjjb8\") pod \"calico-apiserver-5d9cd77888-5f95g\" (UID: \"aa5a76da-05ef-4313-b8e8-abf8bc713cb3\") " pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" Jan 30 14:21:55.637703 kubelet[3054]: I0130 14:21:55.637357 3054 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a9af570-d877-46a4-8392-c4f16e337c47-config-volume\") pod \"coredns-668d6bf9bc-25zb4\" (UID: \"4a9af570-d877-46a4-8392-c4f16e337c47\") " pod="kube-system/coredns-668d6bf9bc-25zb4" Jan 30 14:21:55.834891 containerd[1802]: time="2025-01-30T14:21:55.834624054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d49bff-mr47g,Uid:e37efeb5-45cf-4c20-9a68-83d461b1575b,Namespace:calico-system,Attempt:0,}" Jan 30 14:21:55.840160 containerd[1802]: time="2025-01-30T14:21:55.840051793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-25zb4,Uid:4a9af570-d877-46a4-8392-c4f16e337c47,Namespace:kube-system,Attempt:0,}" Jan 30 14:21:55.846339 containerd[1802]: time="2025-01-30T14:21:55.846221095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d56fg,Uid:9d71b019-e1d6-49af-8cc1-c191b20fbc5e,Namespace:kube-system,Attempt:0,}" Jan 30 14:21:55.851461 containerd[1802]: time="2025-01-30T14:21:55.851345425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-5f95g,Uid:aa5a76da-05ef-4313-b8e8-abf8bc713cb3,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:21:55.854727 containerd[1802]: time="2025-01-30T14:21:55.854597445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-nfhxt,Uid:dd528602-7e46-432f-8601-8c9ecb2abf83,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:21:56.054879 containerd[1802]: time="2025-01-30T14:21:56.054791186Z" level=info msg="shim disconnected" id=a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8 namespace=k8s.io Jan 30 14:21:56.054879 containerd[1802]: time="2025-01-30T14:21:56.054856284Z" level=warning msg="cleaning up after shim disconnected" id=a83b6a00cbfd319c6ff591dcf032c1491b760490c2c2e61bb45c9b65b121aff8 namespace=k8s.io Jan 30 14:21:56.054879 containerd[1802]: time="2025-01-30T14:21:56.054862285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:21:56.094617 containerd[1802]: time="2025-01-30T14:21:56.094538742Z" level=error msg="Failed to destroy network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.094876 
containerd[1802]: time="2025-01-30T14:21:56.094806586Z" level=error msg="encountered an error cleaning up failed sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.094876 containerd[1802]: time="2025-01-30T14:21:56.094856639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-5f95g,Uid:aa5a76da-05ef-4313-b8e8-abf8bc713cb3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.095031 kubelet[3054]: E0130 14:21:56.095004 3054 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.095067 kubelet[3054]: E0130 14:21:56.095053 3054 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" Jan 30 14:21:56.095088 kubelet[3054]: E0130 14:21:56.095068 3054 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" Jan 30 14:21:56.095116 kubelet[3054]: E0130 14:21:56.095097 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cd77888-5f95g_calico-apiserver(aa5a76da-05ef-4313-b8e8-abf8bc713cb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cd77888-5f95g_calico-apiserver(aa5a76da-05ef-4313-b8e8-abf8bc713cb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" podUID="aa5a76da-05ef-4313-b8e8-abf8bc713cb3" Jan 30 14:21:56.095750 containerd[1802]: time="2025-01-30T14:21:56.095722053Z" level=error msg="Failed to destroy network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.095957 containerd[1802]: time="2025-01-30T14:21:56.095940139Z" level=error msg="encountered an error cleaning up failed sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.095993 containerd[1802]: time="2025-01-30T14:21:56.095978198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-25zb4,Uid:4a9af570-d877-46a4-8392-c4f16e337c47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096039 containerd[1802]: time="2025-01-30T14:21:56.096025541Z" level=error msg="Failed to destroy network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096094 kubelet[3054]: E0130 14:21:56.096080 3054 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096117 kubelet[3054]: E0130 14:21:56.096104 3054 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-25zb4" Jan 30 14:21:56.096138 kubelet[3054]: E0130 14:21:56.096116 3054 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-25zb4" Jan 30 14:21:56.096157 kubelet[3054]: E0130 14:21:56.096148 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-25zb4_kube-system(4a9af570-d877-46a4-8392-c4f16e337c47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-25zb4_kube-system(4a9af570-d877-46a4-8392-c4f16e337c47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-25zb4" podUID="4a9af570-d877-46a4-8392-c4f16e337c47" Jan 30 14:21:56.096194 containerd[1802]: time="2025-01-30T14:21:56.096171877Z" level=error msg="encountered an error cleaning up failed sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096219 containerd[1802]: time="2025-01-30T14:21:56.096192296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d56fg,Uid:9d71b019-e1d6-49af-8cc1-c191b20fbc5e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096253 containerd[1802]: time="2025-01-30T14:21:56.096210626Z" level=error msg="Failed to destroy network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096253 containerd[1802]: time="2025-01-30T14:21:56.096235946Z" level=error msg="Failed to destroy network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096332 kubelet[3054]: E0130 14:21:56.096254 3054 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096332 kubelet[3054]: E0130 14:21:56.096276 3054 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d56fg" Jan 30 14:21:56.096332 kubelet[3054]: E0130 14:21:56.096288 3054 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d56fg" Jan 30 14:21:56.096428 kubelet[3054]: E0130 14:21:56.096313 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-d56fg_kube-system(9d71b019-e1d6-49af-8cc1-c191b20fbc5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d56fg_kube-system(9d71b019-e1d6-49af-8cc1-c191b20fbc5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d56fg" podUID="9d71b019-e1d6-49af-8cc1-c191b20fbc5e" Jan 30 14:21:56.096517 containerd[1802]: time="2025-01-30T14:21:56.096362918Z" level=error msg="encountered an error cleaning up failed sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096517 containerd[1802]: time="2025-01-30T14:21:56.096391650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-nfhxt,Uid:dd528602-7e46-432f-8601-8c9ecb2abf83,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096517 containerd[1802]: time="2025-01-30T14:21:56.096433781Z" level=error msg="encountered an error cleaning up failed sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096517 containerd[1802]: time="2025-01-30T14:21:56.096481368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d49bff-mr47g,Uid:e37efeb5-45cf-4c20-9a68-83d461b1575b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096651 kubelet[3054]: E0130 14:21:56.096445 3054 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096651 kubelet[3054]: E0130 14:21:56.096481 3054 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" Jan 30 14:21:56.096651 kubelet[3054]: E0130 14:21:56.096506 3054 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" Jan 30 14:21:56.096708 kubelet[3054]: E0130 14:21:56.096537 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cd77888-nfhxt_calico-apiserver(dd528602-7e46-432f-8601-8c9ecb2abf83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cd77888-nfhxt_calico-apiserver(dd528602-7e46-432f-8601-8c9ecb2abf83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" podUID="dd528602-7e46-432f-8601-8c9ecb2abf83" Jan 30 14:21:56.096708 kubelet[3054]: E0130 14:21:56.096587 3054 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.096708 kubelet[3054]: E0130 14:21:56.096605 3054 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" Jan 30 14:21:56.096774 kubelet[3054]: E0130 14:21:56.096613 3054 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" Jan 30 14:21:56.096774 kubelet[3054]: E0130 14:21:56.096629 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb7d49bff-mr47g_calico-system(e37efeb5-45cf-4c20-9a68-83d461b1575b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb7d49bff-mr47g_calico-system(e37efeb5-45cf-4c20-9a68-83d461b1575b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" podUID="e37efeb5-45cf-4c20-9a68-83d461b1575b" Jan 30 14:21:56.343189 kubelet[3054]: I0130 14:21:56.343172 3054 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:21:56.343461 containerd[1802]: time="2025-01-30T14:21:56.343444598Z" level=info msg="StopPodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\"" Jan 30 14:21:56.343552 containerd[1802]: time="2025-01-30T14:21:56.343540411Z" level=info msg="Ensure that sandbox c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad in task-service has been cleanup successfully" Jan 30 14:21:56.343610 kubelet[3054]: I0130 14:21:56.343602 3054 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:21:56.343818 containerd[1802]: time="2025-01-30T14:21:56.343806417Z" level=info msg="StopPodSandbox for \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\"" Jan 30 14:21:56.343898 containerd[1802]: time="2025-01-30T14:21:56.343887950Z" level=info msg="Ensure that sandbox a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e in task-service has been cleanup successfully" Jan 30 14:21:56.344027 kubelet[3054]: I0130 14:21:56.344019 3054 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:21:56.344238 containerd[1802]: time="2025-01-30T14:21:56.344222372Z" level=info msg="StopPodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\"" Jan 30 14:21:56.344350 containerd[1802]: time="2025-01-30T14:21:56.344335982Z" level=info msg="Ensure that sandbox 93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50 in task-service has been cleanup successfully" Jan 30 14:21:56.344459 kubelet[3054]: I0130 14:21:56.344450 3054 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:21:56.344687 containerd[1802]: time="2025-01-30T14:21:56.344643285Z" level=info msg="StopPodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\"" Jan 30 14:21:56.344745 containerd[1802]: time="2025-01-30T14:21:56.344733188Z" level=info msg="Ensure that sandbox 6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7 in task-service has been cleanup successfully" Jan 30 14:21:56.346030 kubelet[3054]: I0130 14:21:56.346010 3054 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:21:56.346177 containerd[1802]: time="2025-01-30T14:21:56.346092105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:21:56.346359 containerd[1802]: time="2025-01-30T14:21:56.346337702Z" level=info msg="StopPodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\"" Jan 30 14:21:56.346511 containerd[1802]: time="2025-01-30T14:21:56.346495660Z" level=info msg="Ensure that sandbox 5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9 in task-service has been cleanup successfully" Jan 30 14:21:56.358895 containerd[1802]: time="2025-01-30T14:21:56.358855843Z" level=error msg="StopPodSandbox for 
\"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" failed" error="failed to destroy network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.359016 kubelet[3054]: E0130 14:21:56.358991 3054 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:21:56.359061 kubelet[3054]: E0130 14:21:56.359035 3054 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e"} Jan 30 14:21:56.359088 kubelet[3054]: E0130 14:21:56.359073 3054 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d71b019-e1d6-49af-8cc1-c191b20fbc5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:21:56.359131 kubelet[3054]: E0130 14:21:56.359087 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d71b019-e1d6-49af-8cc1-c191b20fbc5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d56fg" podUID="9d71b019-e1d6-49af-8cc1-c191b20fbc5e" Jan 30 14:21:56.359953 containerd[1802]: time="2025-01-30T14:21:56.359931027Z" level=error msg="StopPodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" failed" error="failed to destroy network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.360041 kubelet[3054]: E0130 14:21:56.360026 3054 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:21:56.360070 kubelet[3054]: E0130 14:21:56.360045 3054 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad"} Jan 30 14:21:56.360070 kubelet[3054]: E0130 14:21:56.360062 3054 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd528602-7e46-432f-8601-8c9ecb2abf83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:21:56.360149 kubelet[3054]: E0130 14:21:56.360073 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd528602-7e46-432f-8601-8c9ecb2abf83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" podUID="dd528602-7e46-432f-8601-8c9ecb2abf83" Jan 30 14:21:56.360212 containerd[1802]: time="2025-01-30T14:21:56.360195835Z" level=error msg="StopPodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" failed" error="failed to destroy network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.360296 kubelet[3054]: E0130 14:21:56.360280 3054 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:21:56.360339 kubelet[3054]: E0130 14:21:56.360305 3054 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50"} Jan 30 14:21:56.360339 kubelet[3054]: E0130 14:21:56.360326 3054 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a9af570-d877-46a4-8392-c4f16e337c47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:21:56.360401 kubelet[3054]: E0130 14:21:56.360343 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a9af570-d877-46a4-8392-c4f16e337c47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-25zb4" podUID="4a9af570-d877-46a4-8392-c4f16e337c47" Jan 30 14:21:56.361528 containerd[1802]: time="2025-01-30T14:21:56.361485843Z" level=error msg="StopPodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" failed" error="failed to destroy network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.361607 kubelet[3054]: E0130 14:21:56.361553 3054 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:21:56.361607 kubelet[3054]: E0130 14:21:56.361573 3054 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7"} Jan 30 14:21:56.361607 kubelet[3054]: E0130 14:21:56.361589 3054 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e37efeb5-45cf-4c20-9a68-83d461b1575b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:21:56.361607 kubelet[3054]: E0130 14:21:56.361599 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e37efeb5-45cf-4c20-9a68-83d461b1575b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" podUID="e37efeb5-45cf-4c20-9a68-83d461b1575b" Jan 30 14:21:56.363337 containerd[1802]: time="2025-01-30T14:21:56.363320125Z" level=error msg="StopPodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" failed" error="failed to destroy network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:56.363429 kubelet[3054]: E0130 14:21:56.363386 3054 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:21:56.363429 kubelet[3054]: E0130 14:21:56.363403 3054 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9"} Jan 30 14:21:56.363429 kubelet[3054]: E0130 14:21:56.363418 3054 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa5a76da-05ef-4313-b8e8-abf8bc713cb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:21:56.363515 kubelet[3054]: E0130 14:21:56.363430 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa5a76da-05ef-4313-b8e8-abf8bc713cb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" podUID="aa5a76da-05ef-4313-b8e8-abf8bc713cb3" Jan 30 14:21:56.860777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad-shm.mount: Deactivated successfully. Jan 30 14:21:56.860890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9-shm.mount: Deactivated successfully. Jan 30 14:21:56.860965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e-shm.mount: Deactivated successfully. Jan 30 14:21:56.861035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7-shm.mount: Deactivated successfully. Jan 30 14:21:56.861101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50-shm.mount: Deactivated successfully. Jan 30 14:21:57.302187 systemd[1]: Created slice kubepods-besteffort-pod2025d343_9493_4be3_aac1_dde8efb093f7.slice - libcontainer container kubepods-besteffort-pod2025d343_9493_4be3_aac1_dde8efb093f7.slice. 
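Every error in the burst above is one condition surfacing through several call sites (RunPodSandbox, the sandbox cleanup path, the kubelet's pod workers): the CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes once it is running with /var/lib/calico mounted, so every ADD and DEL fails identically for each pending pod. A minimal sketch of that readiness gate, assuming only the path and remediation hint quoted in the error text; this is illustrative, not Calico's actual source:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodename approximates the gate the CNI plugin is tripping on above: no
// /var/lib/calico/nodename file means calico/node has not started (or has
// not mounted /var/lib/calico), so network setup/teardown cannot proceed.
func nodename(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodename("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("node name:", name)
}
```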
Jan 30 14:21:57.307764 containerd[1802]: time="2025-01-30T14:21:57.307646125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrdfq,Uid:2025d343-9493-4be3-aac1-dde8efb093f7,Namespace:calico-system,Attempt:0,}" Jan 30 14:21:57.340694 containerd[1802]: time="2025-01-30T14:21:57.340666108Z" level=error msg="Failed to destroy network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:57.340876 containerd[1802]: time="2025-01-30T14:21:57.340858868Z" level=error msg="encountered an error cleaning up failed sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:57.340932 containerd[1802]: time="2025-01-30T14:21:57.340919255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrdfq,Uid:2025d343-9493-4be3-aac1-dde8efb093f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:57.341102 kubelet[3054]: E0130 14:21:57.341079 3054 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:57.341259 kubelet[3054]: E0130 14:21:57.341124 3054 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:57.341259 kubelet[3054]: E0130 14:21:57.341138 3054 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrdfq" Jan 30 14:21:57.341259 kubelet[3054]: E0130 14:21:57.341165 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rrdfq_calico-system(2025d343-9493-4be3-aac1-dde8efb093f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rrdfq_calico-system(2025d343-9493-4be3-aac1-dde8efb093f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrdfq" podUID="2025d343-9493-4be3-aac1-dde8efb093f7" Jan 30 14:21:57.342091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79-shm.mount: Deactivated successfully. Jan 30 14:21:57.347467 kubelet[3054]: I0130 14:21:57.347458 3054 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:21:57.347771 containerd[1802]: time="2025-01-30T14:21:57.347756338Z" level=info msg="StopPodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\"" Jan 30 14:21:57.347899 containerd[1802]: time="2025-01-30T14:21:57.347886734Z" level=info msg="Ensure that sandbox 996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79 in task-service has been cleanup successfully" Jan 30 14:21:57.361096 containerd[1802]: time="2025-01-30T14:21:57.361069412Z" level=error msg="StopPodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" failed" error="failed to destroy network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:21:57.361265 kubelet[3054]: E0130 14:21:57.361247 3054 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:21:57.361306 kubelet[3054]: E0130 14:21:57.361275 3054 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79"} Jan 30 14:21:57.361328 kubelet[3054]: E0130 14:21:57.361303 3054 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2025d343-9493-4be3-aac1-dde8efb093f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:21:57.361328 kubelet[3054]: E0130 14:21:57.361319 3054 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2025d343-9493-4be3-aac1-dde8efb093f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-rrdfq" podUID="2025d343-9493-4be3-aac1-dde8efb093f7" Jan 30 14:21:59.624313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929092746.mount: Deactivated successfully. Jan 30 14:21:59.647936 containerd[1802]: time="2025-01-30T14:21:59.647909392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:59.648137 containerd[1802]: time="2025-01-30T14:21:59.648113431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 14:21:59.648435 containerd[1802]: time="2025-01-30T14:21:59.648421018Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:59.649646 containerd[1802]: time="2025-01-30T14:21:59.649629227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:21:59.649900 containerd[1802]: time="2025-01-30T14:21:59.649886632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.303759394s" Jan 30 14:21:59.649939 containerd[1802]: time="2025-01-30T14:21:59.649900687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 14:21:59.653168 containerd[1802]: time="2025-01-30T14:21:59.653149966Z" level=info msg="CreateContainer within sandbox \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:21:59.658455 containerd[1802]: time="2025-01-30T14:21:59.658439811Z" level=info msg="CreateContainer within sandbox \"1a46abd05af77ef43b422849d7011ec5eb0e8fb145e9cca29fc00a133b5ade27\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"289452e57bd4510883f3a0cdaa7ead4cc8fde5d90011b5c0fac96778614715c4\"" Jan 30 14:21:59.658725 containerd[1802]: time="2025-01-30T14:21:59.658709777Z" level=info msg="StartContainer for \"289452e57bd4510883f3a0cdaa7ead4cc8fde5d90011b5c0fac96778614715c4\"" Jan 30 14:21:59.680822 systemd[1]: Started cri-containerd-289452e57bd4510883f3a0cdaa7ead4cc8fde5d90011b5c0fac96778614715c4.scope - libcontainer container 289452e57bd4510883f3a0cdaa7ead4cc8fde5d90011b5c0fac96778614715c4. Jan 30 14:21:59.731629 containerd[1802]: time="2025-01-30T14:21:59.731592444Z" level=info msg="StartContainer for \"289452e57bd4510883f3a0cdaa7ead4cc8fde5d90011b5c0fac96778614715c4\" returns successfully" Jan 30 14:21:59.807175 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:21:59.807229 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 14:22:00.389136 kubelet[3054]: I0130 14:22:00.388997 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vmdzk" podStartSLOduration=1.150813377 podStartE2EDuration="11.388962215s" podCreationTimestamp="2025-01-30 14:21:49 +0000 UTC" firstStartedPulling="2025-01-30 14:21:49.412106249 +0000 UTC m=+13.167918737" lastFinishedPulling="2025-01-30 14:21:59.650255087 +0000 UTC m=+23.406067575" observedRunningTime="2025-01-30 14:22:00.388218061 +0000 UTC m=+24.144030612" watchObservedRunningTime="2025-01-30 14:22:00.388962215 +0000 UTC m=+24.144774751" Jan 30 14:22:00.877600 kubelet[3054]: I0130 14:22:00.877488 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:22:01.120373 kernel: bpftool[4691]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:22:01.266962 systemd-networkd[1600]: vxlan.calico: Link UP Jan 30 14:22:01.266965 systemd-networkd[1600]: vxlan.calico: Gained carrier Jan 30 14:22:01.359042 kubelet[3054]: I0130 14:22:01.358997 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:22:02.578546 systemd-networkd[1600]: vxlan.calico: Gained IPv6LL Jan 30 14:22:03.512521 kubelet[3054]: I0130 14:22:03.512418 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:22:07.288487 containerd[1802]: time="2025-01-30T14:22:07.288390747Z" level=info msg="StopPodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\"" Jan 30 14:22:07.289799 containerd[1802]: time="2025-01-30T14:22:07.288425974Z" level=info msg="StopPodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\"" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.353 [INFO][4892] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.354 [INFO][4892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" iface="eth0" netns="/var/run/netns/cni-dcffe63d-aea0-fb46-c340-13527a3793f4" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.354 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" iface="eth0" netns="/var/run/netns/cni-dcffe63d-aea0-fb46-c340-13527a3793f4" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.354 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" iface="eth0" netns="/var/run/netns/cni-dcffe63d-aea0-fb46-c340-13527a3793f4" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.354 [INFO][4892] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.354 [INFO][4892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.369 [INFO][4925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.369 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.369 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.373 [WARNING][4925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.373 [INFO][4925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.374 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:07.376793 containerd[1802]: 2025-01-30 14:22:07.375 [INFO][4892] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:07.377436 containerd[1802]: time="2025-01-30T14:22:07.376891935Z" level=info msg="TearDown network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" successfully" Jan 30 14:22:07.377436 containerd[1802]: time="2025-01-30T14:22:07.376917600Z" level=info msg="StopPodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" returns successfully" Jan 30 14:22:07.377436 containerd[1802]: time="2025-01-30T14:22:07.377388844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-5f95g,Uid:aa5a76da-05ef-4313-b8e8-abf8bc713cb3,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:22:07.378790 systemd[1]: run-netns-cni\x2ddcffe63d\x2daea0\x2dfb46\x2dc340\x2d13527a3793f4.mount: Deactivated successfully. Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.355 [INFO][4891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.355 [INFO][4891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" iface="eth0" netns="/var/run/netns/cni-e6686921-3dbe-3f93-f5d9-70edbbeeb003" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.355 [INFO][4891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" iface="eth0" netns="/var/run/netns/cni-e6686921-3dbe-3f93-f5d9-70edbbeeb003" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.355 [INFO][4891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" iface="eth0" netns="/var/run/netns/cni-e6686921-3dbe-3f93-f5d9-70edbbeeb003" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.355 [INFO][4891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.355 [INFO][4891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.370 [INFO][4926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.370 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.374 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.378 [WARNING][4926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.378 [INFO][4926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.378 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:07.380687 containerd[1802]: 2025-01-30 14:22:07.380 [INFO][4891] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:07.380968 containerd[1802]: time="2025-01-30T14:22:07.380731751Z" level=info msg="TearDown network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" successfully" Jan 30 14:22:07.380968 containerd[1802]: time="2025-01-30T14:22:07.380745541Z" level=info msg="StopPodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" returns successfully" Jan 30 14:22:07.381092 containerd[1802]: time="2025-01-30T14:22:07.381079055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-nfhxt,Uid:dd528602-7e46-432f-8601-8c9ecb2abf83,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:22:07.382074 systemd[1]: run-netns-cni\x2de6686921\x2d3dbe\x2d3f93\x2df5d9\x2d70edbbeeb003.mount: Deactivated successfully. Jan 30 14:22:07.446795 systemd-networkd[1600]: cali58802ad25f7: Link UP Jan 30 14:22:07.446909 systemd-networkd[1600]: cali58802ad25f7: Gained carrier Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.414 [INFO][4955] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0 calico-apiserver-5d9cd77888- calico-apiserver aa5a76da-05ef-4313-b8e8-abf8bc713cb3 717 0 2025-01-30 14:21:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d9cd77888 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b3fea05ed8 calico-apiserver-5d9cd77888-5f95g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58802ad25f7 [] []}} ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.414 [INFO][4955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.429 [INFO][4994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" HandleID="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.433 [INFO][4994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" HandleID="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b3fea05ed8", "pod":"calico-apiserver-5d9cd77888-5f95g", "timestamp":"2025-01-30 14:22:07.429086254 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b3fea05ed8", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.433 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.433 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.433 [INFO][4994] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b3fea05ed8' Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.434 [INFO][4994] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.435 [INFO][4994] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.437 [INFO][4994] ipam/ipam.go 489: Trying affinity for 192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.439 [INFO][4994] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.440 [INFO][4994] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.440 [INFO][4994] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.192/26 handle="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.440 [INFO][4994] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315 Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.442 [INFO][4994] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.192/26 handle="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][4994] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.193/26] block=192.168.59.192/26 handle="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][4994] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.193/26] handle="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:22:07.451835 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][4994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.193/26] IPv6=[] ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" HandleID="k8s-pod-network.dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.452278 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][4955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa5a76da-05ef-4313-b8e8-abf8bc713cb3", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"", Pod:"calico-apiserver-5d9cd77888-5f95g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58802ad25f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:07.452278 containerd[1802]: 2025-01-30 14:22:07.446 [INFO][4955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.193/32] ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.452278 containerd[1802]: 2025-01-30 14:22:07.446 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58802ad25f7 ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.452278 containerd[1802]: 2025-01-30 14:22:07.446 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.452278 containerd[1802]: 2025-01-30 14:22:07.447 [INFO][4955] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa5a76da-05ef-4313-b8e8-abf8bc713cb3", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315", Pod:"calico-apiserver-5d9cd77888-5f95g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58802ad25f7", MAC:"82:05:95:8d:52:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:07.452278 containerd[1802]: 2025-01-30 14:22:07.450 [INFO][4955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-5f95g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:07.460738 containerd[1802]: time="2025-01-30T14:22:07.460695584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:22:07.460738 containerd[1802]: time="2025-01-30T14:22:07.460727950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:22:07.460738 containerd[1802]: time="2025-01-30T14:22:07.460735404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:07.460845 containerd[1802]: time="2025-01-30T14:22:07.460780082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:07.479577 systemd[1]: Started cri-containerd-dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315.scope - libcontainer container dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315. 
Jan 30 14:22:07.502518 containerd[1802]: time="2025-01-30T14:22:07.502470666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-5f95g,Uid:aa5a76da-05ef-4313-b8e8-abf8bc713cb3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315\"" Jan 30 14:22:07.503137 containerd[1802]: time="2025-01-30T14:22:07.503125476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:22:07.558022 systemd-networkd[1600]: cali05e98e4162e: Link UP Jan 30 14:22:07.558335 systemd-networkd[1600]: cali05e98e4162e: Gained carrier Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.415 [INFO][4964] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0 calico-apiserver-5d9cd77888- calico-apiserver dd528602-7e46-432f-8601-8c9ecb2abf83 718 0 2025-01-30 14:21:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d9cd77888 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-b3fea05ed8 calico-apiserver-5d9cd77888-nfhxt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05e98e4162e [] []}} ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.415 [INFO][4964] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.429 [INFO][5001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" HandleID="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.433 [INFO][5001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" HandleID="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000391570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-b3fea05ed8", "pod":"calico-apiserver-5d9cd77888-nfhxt", "timestamp":"2025-01-30 14:22:07.429392724 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b3fea05ed8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.433 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.445 [INFO][5001] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b3fea05ed8' Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.535 [INFO][5001] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.539 [INFO][5001] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.542 [INFO][5001] ipam/ipam.go 489: Trying affinity for 192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.544 [INFO][5001] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.546 [INFO][5001] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.546 [INFO][5001] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.192/26 handle="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.547 [INFO][5001] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2 Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.550 [INFO][5001] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.192/26 handle="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.554 [INFO][5001] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.194/26] block=192.168.59.192/26 handle="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.554 [INFO][5001] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.194/26] handle="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.554 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:22:07.567983 containerd[1802]: 2025-01-30 14:22:07.555 [INFO][5001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.194/26] IPv6=[] ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" HandleID="k8s-pod-network.b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.568915 containerd[1802]: 2025-01-30 14:22:07.556 [INFO][4964] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd528602-7e46-432f-8601-8c9ecb2abf83", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"", Pod:"calico-apiserver-5d9cd77888-nfhxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05e98e4162e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:07.568915 containerd[1802]: 2025-01-30 14:22:07.556 [INFO][4964] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.194/32] ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.568915 containerd[1802]: 2025-01-30 14:22:07.556 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05e98e4162e ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.568915 containerd[1802]: 2025-01-30 14:22:07.558 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.568915 containerd[1802]: 2025-01-30 14:22:07.558 [INFO][4964] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd528602-7e46-432f-8601-8c9ecb2abf83", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2", Pod:"calico-apiserver-5d9cd77888-nfhxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05e98e4162e", MAC:"92:2b:51:a9:35:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:07.568915 containerd[1802]: 2025-01-30 14:22:07.566 [INFO][4964] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cd77888-nfhxt" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:07.580405 containerd[1802]: time="2025-01-30T14:22:07.580334202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:22:07.580405 containerd[1802]: time="2025-01-30T14:22:07.580365706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:22:07.580405 containerd[1802]: time="2025-01-30T14:22:07.580372805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:07.580524 containerd[1802]: time="2025-01-30T14:22:07.580441519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:07.597507 systemd[1]: Started cri-containerd-b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2.scope - libcontainer container b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2. 
Jan 30 14:22:07.623513 containerd[1802]: time="2025-01-30T14:22:07.623484898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cd77888-nfhxt,Uid:dd528602-7e46-432f-8601-8c9ecb2abf83,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2\"" Jan 30 14:22:08.288070 containerd[1802]: time="2025-01-30T14:22:08.287983193Z" level=info msg="StopPodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\"" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.363 [INFO][5157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.363 [INFO][5157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" iface="eth0" netns="/var/run/netns/cni-cc76aae0-2594-6e0e-ad8a-027b83223b3c" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.364 [INFO][5157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" iface="eth0" netns="/var/run/netns/cni-cc76aae0-2594-6e0e-ad8a-027b83223b3c" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.364 [INFO][5157] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" iface="eth0" netns="/var/run/netns/cni-cc76aae0-2594-6e0e-ad8a-027b83223b3c" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.364 [INFO][5157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.364 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.389 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.389 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.389 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.396 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.396 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.398 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:08.400720 containerd[1802]: 2025-01-30 14:22:08.399 [INFO][5157] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:08.401744 containerd[1802]: time="2025-01-30T14:22:08.400889036Z" level=info msg="TearDown network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" successfully" Jan 30 14:22:08.401744 containerd[1802]: time="2025-01-30T14:22:08.400934456Z" level=info msg="StopPodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" returns successfully" Jan 30 14:22:08.401744 containerd[1802]: time="2025-01-30T14:22:08.401642496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-25zb4,Uid:4a9af570-d877-46a4-8392-c4f16e337c47,Namespace:kube-system,Attempt:1,}" Jan 30 14:22:08.403044 systemd[1]: run-netns-cni\x2dcc76aae0\x2d2594\x2d6e0e\x2dad8a\x2d027b83223b3c.mount: Deactivated successfully. Jan 30 14:22:08.455229 systemd-networkd[1600]: cali312fe6b76af: Link UP Jan 30 14:22:08.455348 systemd-networkd[1600]: cali312fe6b76af: Gained carrier Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.421 [INFO][5192] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0 coredns-668d6bf9bc- kube-system 4a9af570-d877-46a4-8392-c4f16e337c47 733 0 2025-01-30 14:21:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b3fea05ed8 coredns-668d6bf9bc-25zb4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali312fe6b76af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.421 [INFO][5192] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.435 [INFO][5213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" HandleID="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" 
Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.439 [INFO][5213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" HandleID="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000518e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b3fea05ed8", "pod":"coredns-668d6bf9bc-25zb4", "timestamp":"2025-01-30 14:22:08.435253028 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b3fea05ed8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.439 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.439 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.439 [INFO][5213] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b3fea05ed8' Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.441 [INFO][5213] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.442 [INFO][5213] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.445 [INFO][5213] ipam/ipam.go 489: Trying affinity for 192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.446 [INFO][5213] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.447 [INFO][5213] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.447 [INFO][5213] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.192/26 handle="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.448 [INFO][5213] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.450 [INFO][5213] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.192/26 handle="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.453 [INFO][5213] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.195/26] block=192.168.59.192/26 handle="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.453 [INFO][5213] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.195/26] 
handle="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.453 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:08.460641 containerd[1802]: 2025-01-30 14:22:08.453 [INFO][5213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.195/26] IPv6=[] ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" HandleID="k8s-pod-network.093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.461058 containerd[1802]: 2025-01-30 14:22:08.454 [INFO][5192] cni-plugin/k8s.go 386: Populated endpoint ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a9af570-d877-46a4-8392-c4f16e337c47", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"", Pod:"coredns-668d6bf9bc-25zb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali312fe6b76af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:08.461058 containerd[1802]: 2025-01-30 14:22:08.454 [INFO][5192] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.195/32] ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.461058 containerd[1802]: 2025-01-30 14:22:08.454 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali312fe6b76af ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.461058 containerd[1802]: 
2025-01-30 14:22:08.455 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.461058 containerd[1802]: 2025-01-30 14:22:08.455 [INFO][5192] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a9af570-d877-46a4-8392-c4f16e337c47", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f", Pod:"coredns-668d6bf9bc-25zb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali312fe6b76af", MAC:"02:d8:8e:6f:ff:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:08.461058 containerd[1802]: 2025-01-30 14:22:08.459 [INFO][5192] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-25zb4" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:08.483890 containerd[1802]: time="2025-01-30T14:22:08.483828126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:22:08.484084 containerd[1802]: time="2025-01-30T14:22:08.483884761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:22:08.484113 containerd[1802]: time="2025-01-30T14:22:08.484083905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:08.484141 containerd[1802]: time="2025-01-30T14:22:08.484130470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:08.509818 systemd[1]: Started cri-containerd-093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f.scope - libcontainer container 093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f. Jan 30 14:22:08.583307 containerd[1802]: time="2025-01-30T14:22:08.583270683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-25zb4,Uid:4a9af570-d877-46a4-8392-c4f16e337c47,Namespace:kube-system,Attempt:1,} returns sandbox id \"093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f\"" Jan 30 14:22:08.584955 containerd[1802]: time="2025-01-30T14:22:08.584934137Z" level=info msg="CreateContainer within sandbox \"093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:22:08.589837 containerd[1802]: time="2025-01-30T14:22:08.589821805Z" level=info msg="CreateContainer within sandbox \"093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2f977e417f50a5fe036da2c8484eed30a057571522cd852c226636e5f297085\"" Jan 30 14:22:08.590023 containerd[1802]: time="2025-01-30T14:22:08.590010006Z" level=info msg="StartContainer for \"e2f977e417f50a5fe036da2c8484eed30a057571522cd852c226636e5f297085\"" Jan 30 14:22:08.608544 systemd[1]: Started cri-containerd-e2f977e417f50a5fe036da2c8484eed30a057571522cd852c226636e5f297085.scope - libcontainer container e2f977e417f50a5fe036da2c8484eed30a057571522cd852c226636e5f297085. 
Jan 30 14:22:08.620118 containerd[1802]: time="2025-01-30T14:22:08.620092967Z" level=info msg="StartContainer for \"e2f977e417f50a5fe036da2c8484eed30a057571522cd852c226636e5f297085\" returns successfully" Jan 30 14:22:09.106430 systemd-networkd[1600]: cali58802ad25f7: Gained IPv6LL Jan 30 14:22:09.170396 systemd-networkd[1600]: cali05e98e4162e: Gained IPv6LL Jan 30 14:22:09.385847 kubelet[3054]: I0130 14:22:09.385773 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-25zb4" podStartSLOduration=26.385757843 podStartE2EDuration="26.385757843s" podCreationTimestamp="2025-01-30 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:22:09.385458493 +0000 UTC m=+33.141270981" watchObservedRunningTime="2025-01-30 14:22:09.385757843 +0000 UTC m=+33.141570328" Jan 30 14:22:09.386091 containerd[1802]: time="2025-01-30T14:22:09.385875435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:09.386121 containerd[1802]: time="2025-01-30T14:22:09.386090561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 14:22:09.386723 containerd[1802]: time="2025-01-30T14:22:09.386712118Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:09.387906 containerd[1802]: time="2025-01-30T14:22:09.387889005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:09.388417 containerd[1802]: time="2025-01-30T14:22:09.388401132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.885258112s" Jan 30 14:22:09.388465 containerd[1802]: time="2025-01-30T14:22:09.388418671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 14:22:09.389436 containerd[1802]: time="2025-01-30T14:22:09.389209914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:22:09.390231 containerd[1802]: time="2025-01-30T14:22:09.390214079Z" level=info msg="CreateContainer within sandbox \"dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:22:09.394590 containerd[1802]: time="2025-01-30T14:22:09.394565187Z" level=info msg="CreateContainer within sandbox \"dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf0630ce7fe9f56300c211197c40c49f842212b8f42aa6bfcdbbfa09726518d2\"" Jan 30 14:22:09.394877 containerd[1802]: time="2025-01-30T14:22:09.394866227Z" level=info msg="StartContainer for \"bf0630ce7fe9f56300c211197c40c49f842212b8f42aa6bfcdbbfa09726518d2\"" Jan 30 14:22:09.415455 
systemd[1]: Started cri-containerd-bf0630ce7fe9f56300c211197c40c49f842212b8f42aa6bfcdbbfa09726518d2.scope - libcontainer container bf0630ce7fe9f56300c211197c40c49f842212b8f42aa6bfcdbbfa09726518d2. Jan 30 14:22:09.439140 containerd[1802]: time="2025-01-30T14:22:09.439119908Z" level=info msg="StartContainer for \"bf0630ce7fe9f56300c211197c40c49f842212b8f42aa6bfcdbbfa09726518d2\" returns successfully" Jan 30 14:22:09.554450 systemd-networkd[1600]: cali312fe6b76af: Gained IPv6LL Jan 30 14:22:09.800828 containerd[1802]: time="2025-01-30T14:22:09.800804173Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:09.801044 containerd[1802]: time="2025-01-30T14:22:09.801027363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:22:09.802232 containerd[1802]: time="2025-01-30T14:22:09.802216820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 412.979536ms" Jan 30 14:22:09.802259 containerd[1802]: time="2025-01-30T14:22:09.802235037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 14:22:09.803210 containerd[1802]: time="2025-01-30T14:22:09.803198657Z" level=info msg="CreateContainer within sandbox \"b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:22:09.807019 containerd[1802]: time="2025-01-30T14:22:09.807005780Z" level=info msg="CreateContainer within sandbox \"b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e2efcb4fb74e1534af86a843e03377284660f0f2d46ea5ca3a313bd76be6af18\"" Jan 30 14:22:09.807245 containerd[1802]: time="2025-01-30T14:22:09.807232493Z" level=info msg="StartContainer for \"e2efcb4fb74e1534af86a843e03377284660f0f2d46ea5ca3a313bd76be6af18\"" Jan 30 14:22:09.833518 systemd[1]: Started cri-containerd-e2efcb4fb74e1534af86a843e03377284660f0f2d46ea5ca3a313bd76be6af18.scope - libcontainer container e2efcb4fb74e1534af86a843e03377284660f0f2d46ea5ca3a313bd76be6af18. 
Jan 30 14:22:09.860078 containerd[1802]: time="2025-01-30T14:22:09.860024119Z" level=info msg="StartContainer for \"e2efcb4fb74e1534af86a843e03377284660f0f2d46ea5ca3a313bd76be6af18\" returns successfully" Jan 30 14:22:10.392515 kubelet[3054]: I0130 14:22:10.392467 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d9cd77888-nfhxt" podStartSLOduration=20.214597419 podStartE2EDuration="22.392448581s" podCreationTimestamp="2025-01-30 14:21:48 +0000 UTC" firstStartedPulling="2025-01-30 14:22:07.624729051 +0000 UTC m=+31.380541539" lastFinishedPulling="2025-01-30 14:22:09.802580212 +0000 UTC m=+33.558392701" observedRunningTime="2025-01-30 14:22:10.391993651 +0000 UTC m=+34.147806145" watchObservedRunningTime="2025-01-30 14:22:10.392448581 +0000 UTC m=+34.148261072" Jan 30 14:22:10.400152 kubelet[3054]: I0130 14:22:10.399249 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d9cd77888-5f95g" podStartSLOduration=20.513208335 podStartE2EDuration="22.399058947s" podCreationTimestamp="2025-01-30 14:21:48 +0000 UTC" firstStartedPulling="2025-01-30 14:22:07.503009268 +0000 UTC m=+31.258821754" lastFinishedPulling="2025-01-30 14:22:09.388859878 +0000 UTC m=+33.144672366" observedRunningTime="2025-01-30 14:22:10.39866713 +0000 UTC m=+34.154479622" watchObservedRunningTime="2025-01-30 14:22:10.399058947 +0000 UTC m=+34.154871435" Jan 30 14:22:11.288326 containerd[1802]: time="2025-01-30T14:22:11.288186700Z" level=info msg="StopPodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\"" Jan 30 14:22:11.288326 containerd[1802]: time="2025-01-30T14:22:11.288227623Z" level=info msg="StopPodSandbox for \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\"" Jan 30 14:22:11.289199 containerd[1802]: time="2025-01-30T14:22:11.288196180Z" level=info msg="StopPodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\"" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5496] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" iface="eth0" netns="/var/run/netns/cni-8b82eefc-8fc4-d6c6-1745-a080384c6977" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5496] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" iface="eth0" netns="/var/run/netns/cni-8b82eefc-8fc4-d6c6-1745-a080384c6977" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5496] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" iface="eth0" netns="/var/run/netns/cni-8b82eefc-8fc4-d6c6-1745-a080384c6977" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.330 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.330 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.331 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:11.332884 containerd[1802]: 2025-01-30 14:22:11.332 [INFO][5496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:11.333155 containerd[1802]: time="2025-01-30T14:22:11.332937620Z" level=info msg="TearDown network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" successfully" Jan 30 14:22:11.333155 containerd[1802]: time="2025-01-30T14:22:11.332955484Z" level=info msg="StopPodSandbox for \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" returns successfully" Jan 30 14:22:11.333370 containerd[1802]: time="2025-01-30T14:22:11.333314891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d56fg,Uid:9d71b019-e1d6-49af-8cc1-c191b20fbc5e,Namespace:kube-system,Attempt:1,}" Jan 30 14:22:11.335892 systemd[1]: run-netns-cni\x2d8b82eefc\x2d8fc4\x2dd6c6\x2d1745\x2da080384c6977.mount: Deactivated successfully. Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5497] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" iface="eth0" netns="/var/run/netns/cni-d796d9a6-3437-6ef4-29b8-aa55dcd0a571" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" iface="eth0" netns="/var/run/netns/cni-d796d9a6-3437-6ef4-29b8-aa55dcd0a571" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" iface="eth0" netns="/var/run/netns/cni-d796d9a6-3437-6ef4-29b8-aa55dcd0a571" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5497] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.331 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.335 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.335 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.335 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:11.337166 containerd[1802]: 2025-01-30 14:22:11.336 [INFO][5497] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:11.337413 containerd[1802]: time="2025-01-30T14:22:11.337217446Z" level=info msg="TearDown network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" successfully" Jan 30 14:22:11.337413 containerd[1802]: time="2025-01-30T14:22:11.337229239Z" level=info msg="StopPodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" returns successfully" Jan 30 14:22:11.337681 containerd[1802]: time="2025-01-30T14:22:11.337647907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d49bff-mr47g,Uid:e37efeb5-45cf-4c20-9a68-83d461b1575b,Namespace:calico-system,Attempt:1,}" Jan 30 14:22:11.340743 systemd[1]: run-netns-cni\x2dd796d9a6\x2d3437\x2d6ef4\x2d29b8\x2daa55dcd0a571.mount: Deactivated successfully. Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.316 [INFO][5495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" iface="eth0" netns="/var/run/netns/cni-680494e2-a9a7-9d7b-a338-0446b0a2bd1a" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" iface="eth0" netns="/var/run/netns/cni-680494e2-a9a7-9d7b-a338-0446b0a2bd1a" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" iface="eth0" netns="/var/run/netns/cni-680494e2-a9a7-9d7b-a338-0446b0a2bd1a" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.317 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.327 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.336 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.339 [WARNING][5546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.339 [INFO][5546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.340 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:11.342503 containerd[1802]: 2025-01-30 14:22:11.341 [INFO][5495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:11.342813 containerd[1802]: time="2025-01-30T14:22:11.342641006Z" level=info msg="TearDown network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" successfully" Jan 30 14:22:11.342813 containerd[1802]: time="2025-01-30T14:22:11.342662295Z" level=info msg="StopPodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" returns successfully" Jan 30 14:22:11.343083 containerd[1802]: time="2025-01-30T14:22:11.343069239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrdfq,Uid:2025d343-9493-4be3-aac1-dde8efb093f7,Namespace:calico-system,Attempt:1,}" Jan 30 14:22:11.348553 systemd[1]: run-netns-cni\x2d680494e2\x2da9a7\x2d9d7b\x2da338\x2d0446b0a2bd1a.mount: Deactivated successfully. Jan 30 14:22:11.384870 kubelet[3054]: I0130 14:22:11.384855 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:22:11.384870 kubelet[3054]: I0130 14:22:11.384855 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:22:11.389767 systemd-networkd[1600]: cali613a2080931: Link UP Jan 30 14:22:11.389872 systemd-networkd[1600]: cali613a2080931: Gained carrier Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.355 [INFO][5590] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0 coredns-668d6bf9bc- kube-system 9d71b019-e1d6-49af-8cc1-c191b20fbc5e 768 0 2025-01-30 14:21:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-b3fea05ed8 coredns-668d6bf9bc-d56fg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali613a2080931 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.355 [INFO][5590] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.371 
[INFO][5656] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" HandleID="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.375 [INFO][5656] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" HandleID="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003658c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-b3fea05ed8", "pod":"coredns-668d6bf9bc-d56fg", "timestamp":"2025-01-30 14:22:11.371283927 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b3fea05ed8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.375 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.375 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.375 [INFO][5656] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b3fea05ed8' Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.376 [INFO][5656] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.378 [INFO][5656] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.380 [INFO][5656] ipam/ipam.go 489: Trying affinity for 192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.381 [INFO][5656] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.383 [INFO][5656] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.383 [INFO][5656] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.192/26 handle="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.383 [INFO][5656] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2 Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.385 [INFO][5656] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.192/26 handle="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5656] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.196/26] block=192.168.59.192/26 
handle="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5656] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.196/26] handle="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:11.394645 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5656] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.196/26] IPv6=[] ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" HandleID="k8s-pod-network.469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.395041 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5590] cni-plugin/k8s.go 386: Populated endpoint ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d71b019-e1d6-49af-8cc1-c191b20fbc5e", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"", Pod:"coredns-668d6bf9bc-d56fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali613a2080931", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:11.395041 containerd[1802]: 2025-01-30 14:22:11.389 [INFO][5590] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.196/32] ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.395041 containerd[1802]: 2025-01-30 14:22:11.389 [INFO][5590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali613a2080931 ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.395041 containerd[1802]: 2025-01-30 14:22:11.389 [INFO][5590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.395041 containerd[1802]: 2025-01-30 14:22:11.390 [INFO][5590] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d71b019-e1d6-49af-8cc1-c191b20fbc5e", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2", Pod:"coredns-668d6bf9bc-d56fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali613a2080931", MAC:"4a:5f:6d:88:de:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:11.395041 containerd[1802]: 2025-01-30 14:22:11.393 [INFO][5590] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2" Namespace="kube-system" Pod="coredns-668d6bf9bc-d56fg" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:11.404141 containerd[1802]: time="2025-01-30T14:22:11.404050920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:22:11.404303 containerd[1802]: time="2025-01-30T14:22:11.404285824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:22:11.404358 containerd[1802]: time="2025-01-30T14:22:11.404296666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:11.404417 containerd[1802]: time="2025-01-30T14:22:11.404362813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:11.429435 systemd[1]: Started cri-containerd-469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2.scope - libcontainer container 469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2. Jan 30 14:22:11.453474 containerd[1802]: time="2025-01-30T14:22:11.453416016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d56fg,Uid:9d71b019-e1d6-49af-8cc1-c191b20fbc5e,Namespace:kube-system,Attempt:1,} returns sandbox id \"469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2\"" Jan 30 14:22:11.454648 containerd[1802]: time="2025-01-30T14:22:11.454634813Z" level=info msg="CreateContainer within sandbox \"469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:22:11.459094 containerd[1802]: time="2025-01-30T14:22:11.459043221Z" level=info msg="CreateContainer within sandbox \"469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ab86995efba51171699b26f8f40b23163e12958a880b2f35fcc082d9ae3c500\"" Jan 30 14:22:11.459350 containerd[1802]: time="2025-01-30T14:22:11.459308399Z" level=info msg="StartContainer for \"2ab86995efba51171699b26f8f40b23163e12958a880b2f35fcc082d9ae3c500\"" Jan 30 14:22:11.483769 systemd[1]: Started cri-containerd-2ab86995efba51171699b26f8f40b23163e12958a880b2f35fcc082d9ae3c500.scope - libcontainer container 2ab86995efba51171699b26f8f40b23163e12958a880b2f35fcc082d9ae3c500. 
Jan 30 14:22:11.533213 containerd[1802]: time="2025-01-30T14:22:11.533164463Z" level=info msg="StartContainer for \"2ab86995efba51171699b26f8f40b23163e12958a880b2f35fcc082d9ae3c500\" returns successfully" Jan 30 14:22:11.533271 systemd-networkd[1600]: cali59dfc872637: Link UP Jan 30 14:22:11.533638 systemd-networkd[1600]: cali59dfc872637: Gained carrier Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.358 [INFO][5600] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0 calico-kube-controllers-6bb7d49bff- calico-system e37efeb5-45cf-4c20-9a68-83d461b1575b 769 0 2025-01-30 14:21:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb7d49bff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-b3fea05ed8 calico-kube-controllers-6bb7d49bff-mr47g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali59dfc872637 [] []}} ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.358 [INFO][5600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.372 [INFO][5661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" HandleID="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.376 [INFO][5661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" HandleID="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051560), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b3fea05ed8", "pod":"calico-kube-controllers-6bb7d49bff-mr47g", "timestamp":"2025-01-30 14:22:11.372375704 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b3fea05ed8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.376 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.388 [INFO][5661] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b3fea05ed8' Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.478 [INFO][5661] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.488 [INFO][5661] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.497 [INFO][5661] ipam/ipam.go 489: Trying affinity for 192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.501 [INFO][5661] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.508 [INFO][5661] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.508 [INFO][5661] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.192/26 handle="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.511 [INFO][5661] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452 Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.520 [INFO][5661] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.192/26 handle="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.529 [INFO][5661] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.197/26] block=192.168.59.192/26 handle="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.529 [INFO][5661] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.197/26] handle="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.529 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:22:11.543210 containerd[1802]: 2025-01-30 14:22:11.529 [INFO][5661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.197/26] IPv6=[] ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" HandleID="k8s-pod-network.477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.543860 containerd[1802]: 2025-01-30 14:22:11.531 [INFO][5600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0", GenerateName:"calico-kube-controllers-6bb7d49bff-", Namespace:"calico-system", SelfLink:"", UID:"e37efeb5-45cf-4c20-9a68-83d461b1575b", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"", Pod:"calico-kube-controllers-6bb7d49bff-mr47g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali59dfc872637", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:11.543860 containerd[1802]: 2025-01-30 14:22:11.531 [INFO][5600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.197/32] ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.543860 containerd[1802]: 2025-01-30 14:22:11.531 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59dfc872637 ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.543860 containerd[1802]: 2025-01-30 14:22:11.533 [INFO][5600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.543860 
containerd[1802]: 2025-01-30 14:22:11.533 [INFO][5600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0", GenerateName:"calico-kube-controllers-6bb7d49bff-", Namespace:"calico-system", SelfLink:"", UID:"e37efeb5-45cf-4c20-9a68-83d461b1575b", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452", Pod:"calico-kube-controllers-6bb7d49bff-mr47g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali59dfc872637", MAC:"de:c2:87:51:2b:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:11.543860 containerd[1802]: 2025-01-30 14:22:11.541 [INFO][5600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d49bff-mr47g" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:11.556297 containerd[1802]: time="2025-01-30T14:22:11.556228937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:22:11.556297 containerd[1802]: time="2025-01-30T14:22:11.556258086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:22:11.556297 containerd[1802]: time="2025-01-30T14:22:11.556265137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:11.556429 containerd[1802]: time="2025-01-30T14:22:11.556315123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:11.575431 systemd[1]: Started cri-containerd-477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452.scope - libcontainer container 477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452. 
Jan 30 14:22:11.597199 containerd[1802]: time="2025-01-30T14:22:11.597174932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d49bff-mr47g,Uid:e37efeb5-45cf-4c20-9a68-83d461b1575b,Namespace:calico-system,Attempt:1,} returns sandbox id \"477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452\"" Jan 30 14:22:11.597845 containerd[1802]: time="2025-01-30T14:22:11.597824981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:22:11.602744 systemd-networkd[1600]: cali5c7033dfeb8: Link UP Jan 30 14:22:11.602872 systemd-networkd[1600]: cali5c7033dfeb8: Gained carrier Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.365 [INFO][5627] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0 csi-node-driver- calico-system 2025d343-9493-4be3-aac1-dde8efb093f7 770 0 2025-01-30 14:21:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-b3fea05ed8 csi-node-driver-rrdfq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c7033dfeb8 [] []}} ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.366 [INFO][5627] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.380 [INFO][5679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" HandleID="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.384 [INFO][5679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" HandleID="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002195e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-b3fea05ed8", "pod":"csi-node-driver-rrdfq", "timestamp":"2025-01-30 14:22:11.380204697 +0000 UTC"}, Hostname:"ci-4081.3.0-a-b3fea05ed8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.384 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.529 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.529 [INFO][5679] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-b3fea05ed8' Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.577 [INFO][5679] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.586 [INFO][5679] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.593 [INFO][5679] ipam/ipam.go 489: Trying affinity for 192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.593 [INFO][5679] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.595 [INFO][5679] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.192/26 host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.595 [INFO][5679] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.192/26 handle="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.596 [INFO][5679] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945 Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.598 [INFO][5679] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.192/26 handle="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.601 [INFO][5679] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.198/26] block=192.168.59.192/26 handle="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.601 [INFO][5679] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.198/26] handle="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" host="ci-4081.3.0-a-b3fea05ed8" Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.601 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:22:11.608604 containerd[1802]: 2025-01-30 14:22:11.601 [INFO][5679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.198/26] IPv6=[] ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" HandleID="k8s-pod-network.2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.609018 containerd[1802]: 2025-01-30 14:22:11.601 [INFO][5627] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025d343-9493-4be3-aac1-dde8efb093f7", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"", Pod:"csi-node-driver-rrdfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7033dfeb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:11.609018 containerd[1802]: 2025-01-30 14:22:11.602 [INFO][5627] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.198/32] ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.609018 containerd[1802]: 2025-01-30 14:22:11.602 [INFO][5627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c7033dfeb8 ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.609018 containerd[1802]: 2025-01-30 14:22:11.602 [INFO][5627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.609018 containerd[1802]: 2025-01-30 14:22:11.602 [INFO][5627] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025d343-9493-4be3-aac1-dde8efb093f7", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945", Pod:"csi-node-driver-rrdfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7033dfeb8", MAC:"8e:86:9b:62:38:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:11.609018 containerd[1802]: 2025-01-30 14:22:11.607 [INFO][5627] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945" Namespace="calico-system" Pod="csi-node-driver-rrdfq" WorkloadEndpoint="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:11.617592 containerd[1802]: time="2025-01-30T14:22:11.617522092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:22:11.617592 containerd[1802]: time="2025-01-30T14:22:11.617551546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:22:11.617592 containerd[1802]: time="2025-01-30T14:22:11.617558605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:11.617703 containerd[1802]: time="2025-01-30T14:22:11.617598073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:22:11.640510 systemd[1]: Started cri-containerd-2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945.scope - libcontainer container 2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945. 
Jan 30 14:22:11.654800 containerd[1802]: time="2025-01-30T14:22:11.654777233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrdfq,Uid:2025d343-9493-4be3-aac1-dde8efb093f7,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945\"" Jan 30 14:22:12.393646 kubelet[3054]: I0130 14:22:12.393612 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d56fg" podStartSLOduration=29.393600725 podStartE2EDuration="29.393600725s" podCreationTimestamp="2025-01-30 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:22:12.393348811 +0000 UTC m=+36.149161303" watchObservedRunningTime="2025-01-30 14:22:12.393600725 +0000 UTC m=+36.149413211" Jan 30 14:22:12.690636 systemd-networkd[1600]: cali5c7033dfeb8: Gained IPv6LL Jan 30 14:22:13.010401 systemd-networkd[1600]: cali59dfc872637: Gained IPv6LL Jan 30 14:22:13.138431 systemd-networkd[1600]: cali613a2080931: Gained IPv6LL Jan 30 14:22:13.487259 containerd[1802]: time="2025-01-30T14:22:13.487234278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:13.487501 containerd[1802]: time="2025-01-30T14:22:13.487414726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 14:22:13.487949 containerd[1802]: time="2025-01-30T14:22:13.487903196Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:13.489051 containerd[1802]: time="2025-01-30T14:22:13.489032948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:13.489475 containerd[1802]: time="2025-01-30T14:22:13.489422173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.891580865s" Jan 30 14:22:13.489475 containerd[1802]: time="2025-01-30T14:22:13.489438136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 14:22:13.490049 containerd[1802]: time="2025-01-30T14:22:13.490036114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:22:13.493069 containerd[1802]: time="2025-01-30T14:22:13.493049784Z" level=info msg="CreateContainer within sandbox \"477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:22:13.497641 containerd[1802]: time="2025-01-30T14:22:13.497596144Z" level=info msg="CreateContainer within sandbox \"477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"b80d1bf61d83c07ed03e096a853fe7c754ee9f62607f7f4a57814e9f6dc504e6\"" Jan 30 14:22:13.497828 containerd[1802]: time="2025-01-30T14:22:13.497817025Z" level=info msg="StartContainer for \"b80d1bf61d83c07ed03e096a853fe7c754ee9f62607f7f4a57814e9f6dc504e6\"" Jan 30 14:22:13.526756 systemd[1]: Started cri-containerd-b80d1bf61d83c07ed03e096a853fe7c754ee9f62607f7f4a57814e9f6dc504e6.scope - libcontainer container b80d1bf61d83c07ed03e096a853fe7c754ee9f62607f7f4a57814e9f6dc504e6. Jan 30 14:22:13.585884 containerd[1802]: time="2025-01-30T14:22:13.585835954Z" level=info msg="StartContainer for \"b80d1bf61d83c07ed03e096a853fe7c754ee9f62607f7f4a57814e9f6dc504e6\" returns successfully" Jan 30 14:22:14.419867 kubelet[3054]: I0130 14:22:14.419827 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bb7d49bff-mr47g" podStartSLOduration=23.52755214 podStartE2EDuration="25.41981379s" podCreationTimestamp="2025-01-30 14:21:49 +0000 UTC" firstStartedPulling="2025-01-30 14:22:11.597713979 +0000 UTC m=+35.353526464" lastFinishedPulling="2025-01-30 14:22:13.489975622 +0000 UTC m=+37.245788114" observedRunningTime="2025-01-30 14:22:14.419533182 +0000 UTC m=+38.175345673" watchObservedRunningTime="2025-01-30 14:22:14.41981379 +0000 UTC m=+38.175626276" Jan 30 14:22:14.775061 containerd[1802]: time="2025-01-30T14:22:14.775009967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:14.775291 containerd[1802]: time="2025-01-30T14:22:14.775177720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 14:22:14.775643 containerd[1802]: time="2025-01-30T14:22:14.775601132Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:14.776653 containerd[1802]: time="2025-01-30T14:22:14.776612738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:14.777037 containerd[1802]: time="2025-01-30T14:22:14.776996737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.286943308s" Jan 30 14:22:14.777037 containerd[1802]: time="2025-01-30T14:22:14.777012145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 14:22:14.778093 containerd[1802]: time="2025-01-30T14:22:14.778079616Z" level=info msg="CreateContainer within sandbox \"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:22:14.783568 containerd[1802]: time="2025-01-30T14:22:14.783526595Z" level=info msg="CreateContainer within sandbox \"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"646678234946b04de050a96fb00cb7bd5ffe80af2d29cb64b14b85822625714b\"" Jan 30 14:22:14.783834 
containerd[1802]: time="2025-01-30T14:22:14.783780596Z" level=info msg="StartContainer for \"646678234946b04de050a96fb00cb7bd5ffe80af2d29cb64b14b85822625714b\"" Jan 30 14:22:14.808627 systemd[1]: Started cri-containerd-646678234946b04de050a96fb00cb7bd5ffe80af2d29cb64b14b85822625714b.scope - libcontainer container 646678234946b04de050a96fb00cb7bd5ffe80af2d29cb64b14b85822625714b. Jan 30 14:22:14.822140 containerd[1802]: time="2025-01-30T14:22:14.822117199Z" level=info msg="StartContainer for \"646678234946b04de050a96fb00cb7bd5ffe80af2d29cb64b14b85822625714b\" returns successfully" Jan 30 14:22:14.822764 containerd[1802]: time="2025-01-30T14:22:14.822751803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:22:16.175695 containerd[1802]: time="2025-01-30T14:22:16.175636188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:16.175909 containerd[1802]: time="2025-01-30T14:22:16.175872054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 14:22:16.176200 containerd[1802]: time="2025-01-30T14:22:16.176164980Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:16.177227 containerd[1802]: time="2025-01-30T14:22:16.177185511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:16.177719 containerd[1802]: time="2025-01-30T14:22:16.177678403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.354909165s" Jan 30 14:22:16.177719 containerd[1802]: time="2025-01-30T14:22:16.177694120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 14:22:16.178716 containerd[1802]: time="2025-01-30T14:22:16.178680680Z" level=info msg="CreateContainer within sandbox \"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:22:16.183872 containerd[1802]: time="2025-01-30T14:22:16.183854505Z" level=info msg="CreateContainer within sandbox \"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d4dc8536d11c2ed308153e2c310fcdb33f6e29520cdebf189a15987580befcac\"" Jan 30 14:22:16.184122 containerd[1802]: time="2025-01-30T14:22:16.184109110Z" level=info msg="StartContainer for \"d4dc8536d11c2ed308153e2c310fcdb33f6e29520cdebf189a15987580befcac\"" Jan 30 14:22:16.208600 systemd[1]: Started cri-containerd-d4dc8536d11c2ed308153e2c310fcdb33f6e29520cdebf189a15987580befcac.scope - libcontainer container d4dc8536d11c2ed308153e2c310fcdb33f6e29520cdebf189a15987580befcac. 
Jan 30 14:22:16.222113 containerd[1802]: time="2025-01-30T14:22:16.222091048Z" level=info msg="StartContainer for \"d4dc8536d11c2ed308153e2c310fcdb33f6e29520cdebf189a15987580befcac\" returns successfully" Jan 30 14:22:16.334627 kubelet[3054]: I0130 14:22:16.334566 3054 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:22:16.334627 kubelet[3054]: I0130 14:22:16.334637 3054 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:22:16.414361 kubelet[3054]: I0130 14:22:16.414291 3054 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rrdfq" podStartSLOduration=22.891614112 podStartE2EDuration="27.414272254s" podCreationTimestamp="2025-01-30 14:21:49 +0000 UTC" firstStartedPulling="2025-01-30 14:22:11.65539355 +0000 UTC m=+35.411206038" lastFinishedPulling="2025-01-30 14:22:16.178051693 +0000 UTC m=+39.933864180" observedRunningTime="2025-01-30 14:22:16.413983067 +0000 UTC m=+40.169795577" watchObservedRunningTime="2025-01-30 14:22:16.414272254 +0000 UTC m=+40.170084756" Jan 30 14:22:36.285042 containerd[1802]: time="2025-01-30T14:22:36.284858106Z" level=info msg="StopPodSandbox for \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\"" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.344 [WARNING][6157] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d71b019-e1d6-49af-8cc1-c191b20fbc5e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2", Pod:"coredns-668d6bf9bc-d56fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali613a2080931", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 
14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.345 [INFO][6157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.345 [INFO][6157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" iface="eth0" netns="" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.345 [INFO][6157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.345 [INFO][6157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.368 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.368 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.368 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.375 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.375 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.376 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.379345 containerd[1802]: 2025-01-30 14:22:36.378 [INFO][6157] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.380034 containerd[1802]: time="2025-01-30T14:22:36.379386756Z" level=info msg="TearDown network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" successfully" Jan 30 14:22:36.380034 containerd[1802]: time="2025-01-30T14:22:36.379415452Z" level=info msg="StopPodSandbox for \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" returns successfully" Jan 30 14:22:36.380034 containerd[1802]: time="2025-01-30T14:22:36.379925200Z" level=info msg="RemovePodSandbox for \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\"" Jan 30 14:22:36.380034 containerd[1802]: time="2025-01-30T14:22:36.379959456Z" level=info msg="Forcibly stopping sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\"" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.415 [WARNING][6205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d71b019-e1d6-49af-8cc1-c191b20fbc5e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"469d9c1050bbdf790126b569451d31328ed91a6092fda4a5e2ce759af9e17ef2", Pod:"coredns-668d6bf9bc-d56fg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali613a2080931", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.415 [INFO][6205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.415 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" iface="eth0" netns="" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.415 [INFO][6205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.415 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.429 [INFO][6221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.429 [INFO][6221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.429 [INFO][6221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.434 [WARNING][6221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.434 [INFO][6221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" HandleID="k8s-pod-network.a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--d56fg-eth0" Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.435 [INFO][6221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.437214 containerd[1802]: 2025-01-30 14:22:36.436 [INFO][6205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e" Jan 30 14:22:36.437623 containerd[1802]: time="2025-01-30T14:22:36.437237546Z" level=info msg="TearDown network for sandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" successfully" Jan 30 14:22:36.445342 containerd[1802]: time="2025-01-30T14:22:36.445278862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:22:36.445342 containerd[1802]: time="2025-01-30T14:22:36.445316193Z" level=info msg="RemovePodSandbox \"a7dcafd95f647997da7191d86918acf30eeb05c34ca10912356e757f905f252e\" returns successfully" Jan 30 14:22:36.445641 containerd[1802]: time="2025-01-30T14:22:36.445623558Z" level=info msg="StopPodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\"" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.464 [WARNING][6251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0", GenerateName:"calico-kube-controllers-6bb7d49bff-", Namespace:"calico-system", SelfLink:"", UID:"e37efeb5-45cf-4c20-9a68-83d461b1575b", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452", Pod:"calico-kube-controllers-6bb7d49bff-mr47g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali59dfc872637", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.464 [INFO][6251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.464 [INFO][6251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" iface="eth0" netns="" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.464 [INFO][6251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.464 [INFO][6251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.474 [INFO][6265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.474 [INFO][6265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.474 [INFO][6265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.478 [WARNING][6265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.478 [INFO][6265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.479 [INFO][6265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.480660 containerd[1802]: 2025-01-30 14:22:36.479 [INFO][6251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.480660 containerd[1802]: time="2025-01-30T14:22:36.480641500Z" level=info msg="TearDown network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" successfully" Jan 30 14:22:36.480660 containerd[1802]: time="2025-01-30T14:22:36.480656013Z" level=info msg="StopPodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" returns successfully" Jan 30 14:22:36.480997 containerd[1802]: time="2025-01-30T14:22:36.480912698Z" level=info msg="RemovePodSandbox for \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\"" Jan 30 14:22:36.480997 containerd[1802]: time="2025-01-30T14:22:36.480929928Z" level=info msg="Forcibly stopping sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\"" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.500 [WARNING][6291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0", GenerateName:"calico-kube-controllers-6bb7d49bff-", Namespace:"calico-system", SelfLink:"", UID:"e37efeb5-45cf-4c20-9a68-83d461b1575b", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"477240258f45d98152c5d2c57858be2f2caee2fa0b5497530f4614b95345a452", Pod:"calico-kube-controllers-6bb7d49bff-mr47g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali59dfc872637", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.500 [INFO][6291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.500 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" iface="eth0" netns="" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.500 [INFO][6291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.500 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.511 [INFO][6305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.511 [INFO][6305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.511 [INFO][6305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.515 [WARNING][6305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.515 [INFO][6305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" HandleID="k8s-pod-network.6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--kube--controllers--6bb7d49bff--mr47g-eth0" Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.516 [INFO][6305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.518050 containerd[1802]: 2025-01-30 14:22:36.517 [INFO][6291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7" Jan 30 14:22:36.518393 containerd[1802]: time="2025-01-30T14:22:36.518074870Z" level=info msg="TearDown network for sandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" successfully" Jan 30 14:22:36.519500 containerd[1802]: time="2025-01-30T14:22:36.519460778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:22:36.519500 containerd[1802]: time="2025-01-30T14:22:36.519487147Z" level=info msg="RemovePodSandbox \"6ded0a096cd869505953c9c7f2bde2c9bb26faceeafd0e2fe75289794d931ee7\" returns successfully" Jan 30 14:22:36.519806 containerd[1802]: time="2025-01-30T14:22:36.519754180Z" level=info msg="StopPodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\"" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.537 [WARNING][6332] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa5a76da-05ef-4313-b8e8-abf8bc713cb3", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315", Pod:"calico-apiserver-5d9cd77888-5f95g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58802ad25f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.538 [INFO][6332] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.538 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" iface="eth0" netns="" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.538 [INFO][6332] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.538 [INFO][6332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.548 [INFO][6346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.548 [INFO][6346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.548 [INFO][6346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.552 [WARNING][6346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.552 [INFO][6346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.553 [INFO][6346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.554540 containerd[1802]: 2025-01-30 14:22:36.553 [INFO][6332] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.554540 containerd[1802]: time="2025-01-30T14:22:36.554490533Z" level=info msg="TearDown network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" successfully" Jan 30 14:22:36.554540 containerd[1802]: time="2025-01-30T14:22:36.554505264Z" level=info msg="StopPodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" returns successfully" Jan 30 14:22:36.554906 containerd[1802]: time="2025-01-30T14:22:36.554790653Z" level=info msg="RemovePodSandbox for \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\"" Jan 30 14:22:36.554906 containerd[1802]: time="2025-01-30T14:22:36.554806877Z" level=info msg="Forcibly stopping sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\"" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.572 [WARNING][6375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa5a76da-05ef-4313-b8e8-abf8bc713cb3", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"dd36c6bb980a9384c5c7834f81b4c2cd8d2aef6ab7439cfef96ca8418e6c7315", Pod:"calico-apiserver-5d9cd77888-5f95g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58802ad25f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.572 [INFO][6375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.572 [INFO][6375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" iface="eth0" netns="" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.572 [INFO][6375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.572 [INFO][6375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.582 [INFO][6390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.583 [INFO][6390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.583 [INFO][6390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.586 [WARNING][6390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.586 [INFO][6390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" HandleID="k8s-pod-network.5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--5f95g-eth0" Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.587 [INFO][6390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.588902 containerd[1802]: 2025-01-30 14:22:36.588 [INFO][6375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9" Jan 30 14:22:36.589231 containerd[1802]: time="2025-01-30T14:22:36.588908734Z" level=info msg="TearDown network for sandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" successfully" Jan 30 14:22:36.590371 containerd[1802]: time="2025-01-30T14:22:36.590302731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:22:36.590371 containerd[1802]: time="2025-01-30T14:22:36.590332605Z" level=info msg="RemovePodSandbox \"5ee4caa22dfeb3fd5e849c98a860e01be01faf9873496f1876704959ccb32dc9\" returns successfully" Jan 30 14:22:36.590604 containerd[1802]: time="2025-01-30T14:22:36.590576816Z" level=info msg="StopPodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\"" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.609 [WARNING][6420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd528602-7e46-432f-8601-8c9ecb2abf83", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2", Pod:"calico-apiserver-5d9cd77888-nfhxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05e98e4162e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.609 [INFO][6420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.609 [INFO][6420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" iface="eth0" netns="" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.609 [INFO][6420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.609 [INFO][6420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.619 [INFO][6433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.619 [INFO][6433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.619 [INFO][6433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.623 [WARNING][6433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.623 [INFO][6433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.624 [INFO][6433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.625653 containerd[1802]: 2025-01-30 14:22:36.625 [INFO][6420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.625984 containerd[1802]: time="2025-01-30T14:22:36.625681711Z" level=info msg="TearDown network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" successfully" Jan 30 14:22:36.625984 containerd[1802]: time="2025-01-30T14:22:36.625703901Z" level=info msg="StopPodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" returns successfully" Jan 30 14:22:36.626020 containerd[1802]: time="2025-01-30T14:22:36.626001778Z" level=info msg="RemovePodSandbox for \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\"" Jan 30 14:22:36.626039 containerd[1802]: time="2025-01-30T14:22:36.626018566Z" level=info msg="Forcibly stopping sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\"" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.644 [WARNING][6463] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0", GenerateName:"calico-apiserver-5d9cd77888-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd528602-7e46-432f-8601-8c9ecb2abf83", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cd77888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"b91d77f2bf296d72b2c4777b323d1ab8e5665a9b456484f890ce0523c466eff2", Pod:"calico-apiserver-5d9cd77888-nfhxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05e98e4162e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.644 [INFO][6463] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.644 [INFO][6463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" iface="eth0" netns="" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.644 [INFO][6463] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.644 [INFO][6463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.655 [INFO][6475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.655 [INFO][6475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.655 [INFO][6475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.659 [WARNING][6475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.659 [INFO][6475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" HandleID="k8s-pod-network.c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-calico--apiserver--5d9cd77888--nfhxt-eth0" Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.660 [INFO][6475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.661951 containerd[1802]: 2025-01-30 14:22:36.661 [INFO][6463] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad" Jan 30 14:22:36.662420 containerd[1802]: time="2025-01-30T14:22:36.661973241Z" level=info msg="TearDown network for sandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" successfully" Jan 30 14:22:36.664056 containerd[1802]: time="2025-01-30T14:22:36.664017505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:22:36.664056 containerd[1802]: time="2025-01-30T14:22:36.664046235Z" level=info msg="RemovePodSandbox \"c8b6c9b3dfbf55d3c53e1d3c66a2ed296843bdb5074e4d6af3a0c01923feddad\" returns successfully" Jan 30 14:22:36.664326 containerd[1802]: time="2025-01-30T14:22:36.664311831Z" level=info msg="StopPodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\"" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.682 [WARNING][6504] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025d343-9493-4be3-aac1-dde8efb093f7", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945", Pod:"csi-node-driver-rrdfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7033dfeb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.682 [INFO][6504] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.682 [INFO][6504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" iface="eth0" netns="" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.682 [INFO][6504] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.682 [INFO][6504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.694 [INFO][6518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.694 [INFO][6518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.694 [INFO][6518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.697 [WARNING][6518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.697 [INFO][6518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.699 [INFO][6518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.700362 containerd[1802]: 2025-01-30 14:22:36.699 [INFO][6504] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.700687 containerd[1802]: time="2025-01-30T14:22:36.700391051Z" level=info msg="TearDown network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" successfully" Jan 30 14:22:36.700687 containerd[1802]: time="2025-01-30T14:22:36.700407909Z" level=info msg="StopPodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" returns successfully" Jan 30 14:22:36.700747 containerd[1802]: time="2025-01-30T14:22:36.700727593Z" level=info msg="RemovePodSandbox for \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\"" Jan 30 14:22:36.700771 containerd[1802]: time="2025-01-30T14:22:36.700757844Z" level=info msg="Forcibly stopping sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\"" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.722 [WARNING][6547] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025d343-9493-4be3-aac1-dde8efb093f7", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"2e022fef42471c11a255530cd10a69c932d3e4337b5dae6e7163e2442d13b945", Pod:"csi-node-driver-rrdfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c7033dfeb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.722 [INFO][6547] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.722 [INFO][6547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" iface="eth0" netns="" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.722 [INFO][6547] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.722 [INFO][6547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.734 [INFO][6564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.734 [INFO][6564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.734 [INFO][6564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.738 [WARNING][6564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.738 [INFO][6564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" HandleID="k8s-pod-network.996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-csi--node--driver--rrdfq-eth0" Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.739 [INFO][6564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.740831 containerd[1802]: 2025-01-30 14:22:36.740 [INFO][6547] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79" Jan 30 14:22:36.740831 containerd[1802]: time="2025-01-30T14:22:36.740810956Z" level=info msg="TearDown network for sandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" successfully" Jan 30 14:22:36.742241 containerd[1802]: time="2025-01-30T14:22:36.742194028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:22:36.742241 containerd[1802]: time="2025-01-30T14:22:36.742220167Z" level=info msg="RemovePodSandbox \"996a545b68c1706e9183069aaa1f70dd9e56001f630c9d04300ca59f65104c79\" returns successfully" Jan 30 14:22:36.742524 containerd[1802]: time="2025-01-30T14:22:36.742469238Z" level=info msg="StopPodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\"" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.760 [WARNING][6598] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a9af570-d877-46a4-8392-c4f16e337c47", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f", Pod:"coredns-668d6bf9bc-25zb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali312fe6b76af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.760 [INFO][6598] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.760 [INFO][6598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" iface="eth0" netns="" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.760 [INFO][6598] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.760 [INFO][6598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.771 [INFO][6614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.771 [INFO][6614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.771 [INFO][6614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.774 [WARNING][6614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.774 [INFO][6614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.775 [INFO][6614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.776747 containerd[1802]: 2025-01-30 14:22:36.776 [INFO][6598] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.776747 containerd[1802]: time="2025-01-30T14:22:36.776739062Z" level=info msg="TearDown network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" successfully" Jan 30 14:22:36.776747 containerd[1802]: time="2025-01-30T14:22:36.776753677Z" level=info msg="StopPodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" returns successfully" Jan 30 14:22:36.777075 containerd[1802]: time="2025-01-30T14:22:36.776983113Z" level=info msg="RemovePodSandbox for \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\"" Jan 30 14:22:36.777075 containerd[1802]: time="2025-01-30T14:22:36.777000270Z" level=info msg="Forcibly stopping sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\"" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.795 [WARNING][6643] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a9af570-d877-46a4-8392-c4f16e337c47", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-b3fea05ed8", ContainerID:"093e7d418163096ab4f4287bf5ad685839979a1cd9fe32143a54a0d3ba964c5f", Pod:"coredns-668d6bf9bc-25zb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali312fe6b76af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.795 [INFO][6643] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.795 [INFO][6643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" iface="eth0" netns="" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.795 [INFO][6643] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.795 [INFO][6643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.805 [INFO][6654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.805 [INFO][6654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.806 [INFO][6654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.809 [WARNING][6654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.809 [INFO][6654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" HandleID="k8s-pod-network.93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Workload="ci--4081.3.0--a--b3fea05ed8-k8s-coredns--668d6bf9bc--25zb4-eth0" Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.810 [INFO][6654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:22:36.812149 containerd[1802]: 2025-01-30 14:22:36.811 [INFO][6643] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50" Jan 30 14:22:36.812149 containerd[1802]: time="2025-01-30T14:22:36.812134289Z" level=info msg="TearDown network for sandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" successfully" Jan 30 14:22:36.813425 containerd[1802]: time="2025-01-30T14:22:36.813408850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:22:36.813461 containerd[1802]: time="2025-01-30T14:22:36.813435013Z" level=info msg="RemovePodSandbox \"93412e9b64374d2add22cb5acb89e34894467965679bbd1724fc2ba4decc0f50\" returns successfully" Jan 30 14:22:43.128553 kubelet[3054]: I0130 14:22:43.128432 3054 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:25:46.283409 update_engine[1797]: I20250130 14:25:46.283254 1797 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 14:25:46.283409 update_engine[1797]: I20250130 14:25:46.283401 1797 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 14:25:46.284771 update_engine[1797]: I20250130 14:25:46.283797 1797 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 14:25:46.284923 update_engine[1797]: I20250130 14:25:46.284834 1797 omaha_request_params.cc:62] Current group set to lts Jan 30 14:25:46.285237 update_engine[1797]: I20250130 14:25:46.285121 1797 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 14:25:46.285237 update_engine[1797]: I20250130 14:25:46.285171 1797 update_attempter.cc:643] Scheduling an action processor start. 
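The containerd records above show Calico's CNI DEL path running twice for sandbox 93412e9b6437...: once for StopPodSandbox and once for the forcible RemovePodSandbox pass. Both passes log "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" because the WorkloadEndpoint's recorded ContainerID (093e7d4181630...) belongs to a newer sandbox than the one being torn down, so the endpoint itself is kept and only the IP allocation is released under the host-wide IPAM lock; the release then warns "Asked to release address but it doesn't exist" and falls back to a workload-ID lookup, since an earlier teardown already freed the address. The Go sketch below only illustrates that lock/release/fallback sequence as the log presents it; the types and names (ipamStore, releaseByHandle) are hypothetical stand-ins, not Calico's actual API.

```go
// Minimal sketch of the release sequence visible in the ipam_plugin.go log
// lines above: take a host-wide lock, try to release by handle ID, and fall
// back to a workload-ID release when the handle no longer exists. All types
// and function names here are illustrative assumptions, not Calico code.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("not found")

// ipamStore stands in for the real IPAM backend (etcd or Kubernetes CRDs).
type ipamStore struct {
	mu      sync.Mutex        // the "host-wide IPAM lock" from the log
	handles map[string]string // handleID -> allocated IP
}

func (s *ipamStore) releaseByHandle(handleID string) error {
	if _, ok := s.handles[handleID]; !ok {
		return errNotFound
	}
	delete(s.handles, handleID)
	return nil
}

// releaseAddress mirrors the logged flow: handle-based release first, then a
// workload-keyed fallback, ignoring "doesn't exist" just as the WARNING does.
func (s *ipamStore) releaseAddress(handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if err := s.releaseByHandle(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		// "Releasing address using workloadID": a second lookup keyed by
		// the workload, kept for allocations made by older versions.
		_ = s.releaseByHandle(workloadID)
	}
}

func main() {
	s := &ipamStore{handles: map[string]string{
		"k8s-pod-network.93412e9b6437": "192.168.59.195", // IP taken from the WEP dump above
	}}
	// Releasing twice is safe: the second call hits the not-found path,
	// which is what the repeated StopPodSandbox passes above produce.
	s.releaseAddress("k8s-pod-network.93412e9b6437", "ci-4081.3.0-a-b3fea05ed8")
	s.releaseAddress("k8s-pod-network.93412e9b6437", "ci-4081.3.0-a-b3fea05ed8")
}
```

That warn-and-ignore path is why both teardown passes above still end with "Teardown processing complete" and "RemovePodSandbox ... returns successfully" even though there was nothing left to release.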
Jan 30 14:25:46.285237 update_engine[1797]: I20250130 14:25:46.285228 1797 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:25:46.285627 update_engine[1797]: I20250130 14:25:46.285351 1797 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 14:25:46.285627 update_engine[1797]: I20250130 14:25:46.285514 1797 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:25:46.285627 update_engine[1797]: I20250130 14:25:46.285544 1797 omaha_request_action.cc:272] Request: Jan 30 14:25:46.285627 update_engine[1797]: [multi-line Omaha request XML body not preserved in this capture] Jan 30 14:25:46.285627 update_engine[1797]: I20250130 14:25:46.285562 1797 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:25:46.286653 locksmithd[1838]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 14:25:46.288172 update_engine[1797]: I20250130 14:25:46.288134 1797 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:25:46.288369 update_engine[1797]: I20250130 14:25:46.288303 1797 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:25:46.289093 update_engine[1797]: E20250130 14:25:46.289072 1797 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:25:46.289131 update_engine[1797]: I20250130 14:25:46.289103 1797 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 14:25:56.190235 update_engine[1797]: I20250130 14:25:56.190062 1797 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:25:56.191234 update_engine[1797]: I20250130 14:25:56.190675 1797 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:25:56.191374 update_engine[1797]: I20250130 14:25:56.191203 1797 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:25:56.192144 update_engine[1797]: E20250130 14:25:56.192019 1797 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:25:56.192352 update_engine[1797]: I20250130 14:25:56.192165 1797 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 14:26:06.193616 update_engine[1797]: I20250130 14:26:06.193447 1797 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:26:06.195431 update_engine[1797]: I20250130 14:26:06.194016 1797 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:26:06.195431 update_engine[1797]: I20250130 14:26:06.194592 1797 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:26:06.195431 update_engine[1797]: E20250130 14:26:06.195295 1797 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:26:06.195431 update_engine[1797]: I20250130 14:26:06.195367 1797 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 14:26:16.192578 update_engine[1797]: I20250130 14:26:16.192430 1797 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:26:16.193556 update_engine[1797]: I20250130 14:26:16.192971 1797 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:26:16.193556 update_engine[1797]: I20250130 14:26:16.193481 1797 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:26:16.194502 update_engine[1797]: E20250130 14:26:16.194394 1797 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:26:16.194725 update_engine[1797]: I20250130 14:26:16.194531 1797 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:26:16.194725 update_engine[1797]: I20250130 14:26:16.194563 1797 omaha_request_action.cc:617] Omaha request response: Jan 30 14:26:16.194942 update_engine[1797]: E20250130 14:26:16.194723 1797 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 14:26:16.194942 update_engine[1797]: I20250130 14:26:16.194777 1797 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 14:26:16.194942 update_engine[1797]: I20250130 14:26:16.194795 1797 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:26:16.194942 update_engine[1797]: I20250130 14:26:16.194811 1797 update_attempter.cc:306] Processing Done. Jan 30 14:26:16.194942 update_engine[1797]: E20250130 14:26:16.194842 1797 update_attempter.cc:619] Update failed. Jan 30 14:26:16.194942 update_engine[1797]: I20250130 14:26:16.194859 1797 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 14:26:16.194942 update_engine[1797]: I20250130 14:26:16.194875 1797 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 14:26:16.194942 update_engine[1797]: I20250130 14:26:16.194891 1797 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 14:26:16.195658 update_engine[1797]: I20250130 14:26:16.195060 1797 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:26:16.195658 update_engine[1797]: I20250130 14:26:16.195148 1797 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:26:16.195658 update_engine[1797]: I20250130 14:26:16.195187 1797 omaha_request_action.cc:272] Request: Jan 30 14:26:16.195658 update_engine[1797]: [multi-line Omaha request XML body not preserved in this capture] Jan 30 14:26:16.195658 update_engine[1797]: I20250130 14:26:16.195214 1797 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:26:16.196479 update_engine[1797]: I20250130 14:26:16.195730 1797 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:26:16.196479 update_engine[1797]: I20250130 14:26:16.196168 1797 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:26:16.196661 locksmithd[1838]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 14:26:16.197474 update_engine[1797]: E20250130 14:26:16.197350 1797 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197485 1797 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197515 1797 omaha_request_action.cc:617] Omaha request response: Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197535 1797 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197551 1797 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197566 1797 update_attempter.cc:306] Processing Done. Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197583 1797 update_attempter.cc:310] Error event sent. Jan 30 14:26:16.197637 update_engine[1797]: I20250130 14:26:16.197607 1797 update_check_scheduler.cc:74] Next update check in 40m3s Jan 30 14:26:16.198405 locksmithd[1838]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 14:28:47.824714 systemd[1]: Started sshd@9-139.178.70.237:22-147.75.109.163:46342.service - OpenSSH per-connection server daemon (147.75.109.163:46342). Jan 30 14:28:47.857092 sshd[7535]: Accepted publickey for core from 147.75.109.163 port 46342 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:28:47.857953 sshd[7535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:47.861125 systemd-logind[1792]: New session 12 of user core. Jan 30 14:28:47.873597 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:28:47.969819 sshd[7535]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:47.971326 systemd[1]: sshd@9-139.178.70.237:22-147.75.109.163:46342.service: Deactivated successfully. Jan 30 14:28:47.972269 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:28:47.973059 systemd-logind[1792]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:28:47.973708 systemd-logind[1792]: Removed session 12. Jan 30 14:28:52.981206 systemd[1]: Started sshd@10-139.178.70.237:22-147.75.109.163:46352.service - OpenSSH per-connection server daemon (147.75.109.163:46352). Jan 30 14:28:53.010874 sshd[7569]: Accepted publickey for core from 147.75.109.163 port 46352 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:28:53.011579 sshd[7569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:53.014126 systemd-logind[1792]: New session 13 of user core. Jan 30 14:28:53.026517 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:28:53.112675 sshd[7569]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:53.114197 systemd[1]: sshd@10-139.178.70.237:22-147.75.109.163:46352.service: Deactivated successfully. Jan 30 14:28:53.115127 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:28:53.115866 systemd-logind[1792]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:28:53.116499 systemd-logind[1792]: Removed session 13. 
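The update_engine burst above is one complete failed update check: the Omaha request is posted to the host "disabled" (the placeholder server name typically configured when Flatcar updates are turned off), libcurl cannot resolve it, the fetch is retried roughly every ten seconds ("retry 1" through "retry 3"), and the attempt is then abandoned, error code 2000 is mapped to kActionCodeOmahaErrorInHTTPResponse (37), an error event is reported, and the next check is scheduled 40m3s out. update_engine itself is C++ (per the libcurl_http_fetcher.cc citations); the following is only a minimal Go sketch of the same bounded-retry pattern, with illustrative names, URL, and limits:

```go
// Minimal sketch of the bounded-retry behaviour shown in the log: an HTTP
// fetch against an unresolvable host, retried on a fixed ~10 s timer,
// aborting after a small retry budget. Names and limits are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

const maxRetries = 3 // the log gives up after "No HTTP response, retry 3"

func fetchWithRetry(url string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if attempt > 0 {
			fmt.Printf("No HTTP response, retry %d\n", attempt)
			time.Sleep(10 * time.Second) // matches the ~10 s spacing in the log
		}
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err // e.g. "Could not resolve host: disabled"
			continue
		}
		resp.Body.Close()
		return nil
	}
	// "Transfer resulted in an error (0), 0 bytes downloaded"
	return fmt.Errorf("transfer failed after %d retries: %w", maxRetries, lastErr)
}

func main() {
	// The literal host "disabled" fails DNS resolution by design, mirroring
	// the log; the path here is purely illustrative.
	if err := fetchWithRetry("http://disabled/v1/update/"); err != nil {
		fmt.Println("update check failed:", err)
		// The real daemon then reports an error event and reschedules,
		// e.g. "Next update check in 40m3s" (a randomized interval).
	}
}
```

Note that the failure is benign by construction: because no valid Omaha response has ever been seen, payload_state logs "Ignoring failures until we get a valid Omaha response" and locksmithd simply returns to UPDATE_STATUS_IDLE.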
Jan 30 14:28:58.128804 systemd[1]: Started sshd@11-139.178.70.237:22-147.75.109.163:36242.service - OpenSSH per-connection server daemon (147.75.109.163:36242). Jan 30 14:28:58.159700 sshd[7598]: Accepted publickey for core from 147.75.109.163 port 36242 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:28:58.160361 sshd[7598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:58.162825 systemd-logind[1792]: New session 14 of user core. Jan 30 14:28:58.176797 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:28:58.270795 sshd[7598]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:58.297119 systemd[1]: sshd@11-139.178.70.237:22-147.75.109.163:36242.service: Deactivated successfully. Jan 30 14:28:58.301166 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:28:58.304864 systemd-logind[1792]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:28:58.329110 systemd[1]: Started sshd@12-139.178.70.237:22-147.75.109.163:36254.service - OpenSSH per-connection server daemon (147.75.109.163:36254). Jan 30 14:28:58.331750 systemd-logind[1792]: Removed session 14. Jan 30 14:28:58.450536 sshd[7625]: Accepted publickey for core from 147.75.109.163 port 36254 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:28:58.451870 sshd[7625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:58.456227 systemd-logind[1792]: New session 15 of user core. Jan 30 14:28:58.466505 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:28:58.594294 sshd[7625]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:58.614661 systemd[1]: sshd@12-139.178.70.237:22-147.75.109.163:36254.service: Deactivated successfully. Jan 30 14:28:58.619570 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:28:58.623451 systemd-logind[1792]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:28:58.646261 systemd[1]: Started sshd@13-139.178.70.237:22-147.75.109.163:36258.service - OpenSSH per-connection server daemon (147.75.109.163:36258). Jan 30 14:28:58.648919 systemd-logind[1792]: Removed session 15. Jan 30 14:28:58.720596 sshd[7649]: Accepted publickey for core from 147.75.109.163 port 36258 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:28:58.721823 sshd[7649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:28:58.726115 systemd-logind[1792]: New session 16 of user core. Jan 30 14:28:58.744534 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:28:58.829485 sshd[7649]: pam_unix(sshd:session): session closed for user core Jan 30 14:28:58.831583 systemd[1]: sshd@13-139.178.70.237:22-147.75.109.163:36258.service: Deactivated successfully. Jan 30 14:28:58.832529 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:28:58.832958 systemd-logind[1792]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:28:58.833489 systemd-logind[1792]: Removed session 16. Jan 30 14:29:03.848449 systemd[1]: Started sshd@14-139.178.70.237:22-147.75.109.163:36270.service - OpenSSH per-connection server daemon (147.75.109.163:36270). 
Jan 30 14:29:03.876859 sshd[7710]: Accepted publickey for core from 147.75.109.163 port 36270 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:29:03.877551 sshd[7710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:03.880105 systemd-logind[1792]: New session 17 of user core. Jan 30 14:29:03.891551 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:29:04.016656 sshd[7710]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:04.018138 systemd[1]: sshd@14-139.178.70.237:22-147.75.109.163:36270.service: Deactivated successfully. Jan 30 14:29:04.019024 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:29:04.019801 systemd-logind[1792]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:29:04.020294 systemd-logind[1792]: Removed session 17. Jan 30 14:29:09.038928 systemd[1]: Started sshd@15-139.178.70.237:22-147.75.109.163:54748.service - OpenSSH per-connection server daemon (147.75.109.163:54748). Jan 30 14:29:09.067085 sshd[7737]: Accepted publickey for core from 147.75.109.163 port 54748 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:29:09.067810 sshd[7737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:09.070480 systemd-logind[1792]: New session 18 of user core. Jan 30 14:29:09.081583 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:29:09.167934 sshd[7737]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:09.169588 systemd[1]: sshd@15-139.178.70.237:22-147.75.109.163:54748.service: Deactivated successfully. Jan 30 14:29:09.170520 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:29:09.171175 systemd-logind[1792]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:29:09.171806 systemd-logind[1792]: Removed session 18. Jan 30 14:29:14.188547 systemd[1]: Started sshd@16-139.178.70.237:22-147.75.109.163:54754.service - OpenSSH per-connection server daemon (147.75.109.163:54754). Jan 30 14:29:14.228969 sshd[7785]: Accepted publickey for core from 147.75.109.163 port 54754 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:29:14.229848 sshd[7785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:14.232999 systemd-logind[1792]: New session 19 of user core. Jan 30 14:29:14.246572 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:29:14.334337 sshd[7785]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:14.336008 systemd[1]: sshd@16-139.178.70.237:22-147.75.109.163:54754.service: Deactivated successfully. Jan 30 14:29:14.336954 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:29:14.337698 systemd-logind[1792]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:29:14.338319 systemd-logind[1792]: Removed session 19. Jan 30 14:29:19.344809 systemd[1]: Started sshd@17-139.178.70.237:22-147.75.109.163:45822.service - OpenSSH per-connection server daemon (147.75.109.163:45822). Jan 30 14:29:19.388700 sshd[7831]: Accepted publickey for core from 147.75.109.163 port 45822 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:29:19.389697 sshd[7831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:19.393086 systemd-logind[1792]: New session 20 of user core. Jan 30 14:29:19.416573 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 14:29:19.546924 sshd[7831]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:19.560265 systemd[1]: sshd@17-139.178.70.237:22-147.75.109.163:45822.service: Deactivated successfully. Jan 30 14:29:19.561132 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:29:19.561875 systemd-logind[1792]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:29:19.562654 systemd[1]: Started sshd@18-139.178.70.237:22-147.75.109.163:45828.service - OpenSSH per-connection server daemon (147.75.109.163:45828). Jan 30 14:29:19.563261 systemd-logind[1792]: Removed session 20. Jan 30 14:29:19.599757 sshd[7857]: Accepted publickey for core from 147.75.109.163 port 45828 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:29:19.600611 sshd[7857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:19.603322 systemd-logind[1792]: New session 21 of user core. Jan 30 14:29:19.609562 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:29:19.791212 sshd[7857]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:19.807018 systemd[1]: sshd@18-139.178.70.237:22-147.75.109.163:45828.service: Deactivated successfully. Jan 30 14:29:19.809090 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:29:19.811157 systemd-logind[1792]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:29:19.813176 systemd[1]: Started sshd@19-139.178.70.237:22-147.75.109.163:45834.service - OpenSSH per-connection server daemon (147.75.109.163:45834). Jan 30 14:29:19.814746 systemd-logind[1792]: Removed session 21. Jan 30 14:29:19.893065 sshd[7882]: Accepted publickey for core from 147.75.109.163 port 45834 ssh2: RSA SHA256:cV+ViCjl4vVxkC08BfT77+E2//9T7/BtXoxxbtaLKjA Jan 30 14:29:19.894088 sshd[7882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:29:19.897834 systemd-logind[1792]: New session 22 of user core. Jan 30 14:29:19.917573 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:29:31.157273 sshd[7882]: pam_unix(sshd:session): session closed for user core Jan 30 14:29:31.158805 systemd[1]: sshd@19-139.178.70.237:22-147.75.109.163:45834.service: Deactivated successfully. Jan 30 14:29:31.159691 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:29:31.160284 systemd-logind[1792]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:29:31.160965 systemd-logind[1792]: Removed session 22.
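The remainder of the excerpt is routine SSH session churn: each inbound connection gets a per-connection sshd@N service, a pam_unix "session opened"/"session closed" pair, and a session-N.scope that systemd deactivates on logout. If you need to quantify these sessions, the open/close pairs can be keyed by the sshd PID visible in each record. The Go sketch below is an illustrative assumption throughout (parsing approach, names, sample selection); only the line format is taken from this log:

```go
// Illustrative sketch: pair "session opened"/"session closed" pam_unix
// records by sshd PID to compute per-session durations.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var re = regexp.MustCompile(
	`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	// Two records copied from the journal above (session 22, sshd[7882]).
	lines := []string{
		"Jan 30 14:29:19.894088 sshd[7882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
		"Jan 30 14:29:31.157273 sshd[7882]: pam_unix(sshd:session): session closed for user core",
	}
	opened := map[string]time.Time{} // PID -> open timestamp
	for _, line := range lines {
		m := re.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// Syslog-style timestamps carry no year; that is fine for durations.
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		switch m[3] {
		case "opened":
			opened[m[2]] = ts
		case "closed":
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("sshd[%s] session lasted %s\n", m[2], ts.Sub(start))
				delete(opened, m[2])
			}
		}
	}
}
```

Run on the two sample records, this reports a duration of about 11.3 s for sshd[7882], matching the open and close timestamps of session 22 above.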